2026-03-08T23:11:09.616 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-08T23:11:09.620 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-08T23:11:09.640 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295
branch: squid
description: orch:cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{ubuntu_22.04} workloads/cephadm_iscsi}
email: null
first_in_suite: false
flavor: default
job_id: '295'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps
no_nested_subset: false
openstack:
- machine:
    cpus: 1
    disk: 40
    ram: 8000
  volumes:
    count: 4
    size: 30
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon warn on pool no app: false
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
      extra_system_packages:
        deb:
        - python3-xmltodict
        - python3-jmespath
        rpm:
        - bzip2
        - perl-Test-Harness
        - python3-xmltodict
        - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.x
  - osd.0
  - osd.1
  - client.0
  - ceph.iscsi.iscsi.a
- - mon.b
  - osd.2
  - osd.3
  - osd.4
  - client.1
- - mon.c
  - osd.5
  - osd.6
  - osd.7
  - client.2
  - ceph.iscsi.iscsi.b
seed: 8017
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch:cephadm
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUub//aFU3yewQjVOH+esp4yb+b6/vNXDUih8S1SDnUNFdxpVuX5asyrEclgJn95aM+TFWYUbCVcdvSZZXh8CA=
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF2K4BGne0Xay+g0YZJUb4Kkf8QRv85ndId2/f5043MLhEFid8ybDyUSBhPSL8h4lomO/zHexKKF4YFf9/KzrHs=
  vm10.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPBIlo/WHWXhQCG+L0BfJ9yQ6URGdrPxLyJNccnaPRd1oNehSNdvAqIoRjxcV8Tj0SdFXFNpl8X3J6ZdZsZC02o=
tasks:
- cephadm: null
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- install:
    extra_system_packages:
      deb:
      - open-iscsi
      - multipath-tools
      rpm:
      - iscsi-initiator-utils
      - device-mapper-multipath
- ceph_iscsi_client:
    clients:
    - client.1
- cram:
    clients:
      client.0:
      - src/test/cli-integration/rbd/gwcli_create.t
      client.1:
      - src/test/cli-integration/rbd/iscsi_client.t
      client.2:
      - src/test/cli-integration/rbd/gwcli_delete.t
    parallel: false
- cram:
    clients:
      client.0:
      - src/test/cli-integration/rbd/rest_api_create.t
      client.1:
      - src/test/cli-integration/rbd/iscsi_client.t
      client.2:
      - src/test/cli-integration/rbd/rest_api_delete.t
    parallel: false
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-08_22:22:45
tube: vps
use_shaman: true
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-08T23:11:09.640 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-08T23:11:09.641 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-08T23:11:09.641 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-08T23:11:09.641 INFO:teuthology.task.internal:Checking packages...
2026-03-08T23:11:09.641 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-08T23:11:09.641 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-08T23:11:09.641 INFO:teuthology.packaging:ref: None
2026-03-08T23:11:09.641 INFO:teuthology.packaging:tag: None
2026-03-08T23:11:09.641 INFO:teuthology.packaging:branch: squid
2026-03-08T23:11:09.641 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:11:09.641 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-08T23:11:10.302 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:11:10.303 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-08T23:11:10.304 INFO:teuthology.task.internal:no buildpackages task found
2026-03-08T23:11:10.304 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-08T23:11:10.304 INFO:teuthology.task.internal:Saving configuration
2026-03-08T23:11:10.308 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-08T23:11:10.309 INFO:teuthology.task.internal.check_lock:Checking locks...
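[editor's note] The Config block above is the fully merged job YAML the dispatcher hands to teuthology.run. As a minimal sketch of inspecting it offline (assuming PyYAML is installed and the YAML has been saved to a file; "job.yaml" is a hypothetical name):

    # Sketch: pull the interesting fields out of a teuthology job config
    # like the one logged above. Filename is illustrative.
    import yaml

    with open("job.yaml") as f:
        job = yaml.safe_load(f)

    print(job["name"])   # kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps
    print(job["sha1"])   # the ceph build under test
    for i, role_group in enumerate(job["roles"]):
        print(f"target {i}: {role_group}")   # one role group per machine, in order
    for host, key in job["targets"].items():
        print(host, key.split()[0])          # locked target -> SSH host key type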
2026-03-08T23:11:10.316 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-08 23:09:40.358609', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUub//aFU3yewQjVOH+esp4yb+b6/vNXDUih8S1SDnUNFdxpVuX5asyrEclgJn95aM+TFWYUbCVcdvSZZXh8CA='}
2026-03-08T23:11:10.320 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-08 23:09:40.359190', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF2K4BGne0Xay+g0YZJUb4Kkf8QRv85ndId2/f5043MLhEFid8ybDyUSBhPSL8h4lomO/zHexKKF4YFf9/KzrHs='}
2026-03-08T23:11:10.325 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm10.local', 'description': '/archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-08 23:09:40.358989', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:0a', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPBIlo/WHWXhQCG+L0BfJ9yQ6URGdrPxLyJNccnaPRd1oNehSNdvAqIoRjxcV8Tj0SdFXFNpl8X3J6ZdZsZC02o='}
2026-03-08T23:11:10.325 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-08T23:11:10.326 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['host.a', 'mon.a', 'mgr.x', 'osd.0', 'osd.1', 'client.0', 'ceph.iscsi.iscsi.a']
2026-03-08T23:11:10.326 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['mon.b', 'osd.2', 'osd.3', 'osd.4', 'client.1']
2026-03-08T23:11:10.326 INFO:teuthology.task.internal:roles: ubuntu@vm10.local - ['mon.c', 'osd.5', 'osd.6', 'osd.7', 'client.2', 'ceph.iscsi.iscsi.b']
2026-03-08T23:11:10.326 INFO:teuthology.run_tasks:Running task console_log...
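[editor's note] Each "machine status is {...}" dict above comes from the lock server. A sketch of the kind of validation internal.check_lock performs, assuming the statuses were collected into a list of dicts shaped like the ones logged:

    # Sketch: validate lock-server status dicts like those logged above.
    def check_locks(statuses, owner="kyr"):
        for st in statuses:
            assert st["up"], f"{st['name']} is down"
            assert st["locked"], f"{st['name']} is not locked"
            assert st["locked_by"] == owner, (
                f"{st['name']} locked by {st['locked_by']}, expected {owner}")

    # Trimmed example using fields from the first dict above.
    check_locks([{"name": "vm02.local", "up": True,
                  "locked": True, "locked_by": "kyr"}])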
2026-03-08T23:11:10.331 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-08T23:11:10.337 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-08T23:11:10.342 DEBUG:teuthology.task.console_log:vm10 does not support IPMI; excluding
2026-03-08T23:11:10.343 DEBUG:teuthology.exit:Installing handler: Handler(exiter=<teuthology.exit.Exiter object>, func=<function console_log.<locals>.kill_console_loggers at 0x7fc3999b8a60>, signals=[15])
2026-03-08T23:11:10.343 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-08T23:11:10.343 INFO:teuthology.task.internal:Opening connections...
2026-03-08T23:11:10.343 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-08T23:11:10.344 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T23:11:10.402 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-08T23:11:10.403 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T23:11:10.462 DEBUG:teuthology.task.internal:connecting to ubuntu@vm10.local
2026-03-08T23:11:10.462 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm10.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T23:11:10.519 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-08T23:11:10.520 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-08T23:11:10.524 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-08T23:11:10.524 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:NAME="Ubuntu"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="22.04"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_CODENAME=jammy
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:ID=ubuntu
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE=debian
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-08T23:11:10.567 INFO:teuthology.orchestra.run.vm02.stdout:UBUNTU_CODENAME=jammy
2026-03-08T23:11:10.567 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-08T23:11:10.571 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-08T23:11:10.574 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-08T23:11:10.574 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-08T23:11:10.618 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:NAME="Ubuntu"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="22.04"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_CODENAME=jammy
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:ID=ubuntu
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE=debian
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-08T23:11:10.619 INFO:teuthology.orchestra.run.vm04.stdout:UBUNTU_CODENAME=jammy
2026-03-08T23:11:10.619 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-08T23:11:10.623 DEBUG:teuthology.orchestra.run.vm10:> uname -m
2026-03-08T23:11:10.626 INFO:teuthology.orchestra.run.vm10.stdout:x86_64
2026-03-08T23:11:10.626 DEBUG:teuthology.orchestra.run.vm10:> cat /etc/os-release
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:NAME="Ubuntu"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:VERSION_ID="22.04"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:VERSION_CODENAME=jammy
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:ID=ubuntu
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:ID_LIKE=debian
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-08T23:11:10.670 INFO:teuthology.orchestra.run.vm10.stdout:UBUNTU_CODENAME=jammy
2026-03-08T23:11:10.670 INFO:teuthology.lock.ops:Updating vm10.local on lock server
2026-03-08T23:11:10.674 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-08T23:11:10.676 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-08T23:11:10.677 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-08T23:11:10.677 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-08T23:11:10.679 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-08T23:11:10.680 DEBUG:teuthology.orchestra.run.vm10:> test '!' -e /home/ubuntu/cephtest
2026-03-08T23:11:10.714 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-08T23:11:10.715 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
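[editor's note] internal.push_inventory derives each node's arch and distro from the uname -m and /etc/os-release output above before updating the lock server. A sketch of parsing that os-release format (simple KEY=value lines, values optionally double-quoted):

    # Sketch: parse /etc/os-release output such as the lines logged above.
    def parse_os_release(text: str) -> dict:
        info = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            info[key] = value.strip('"')   # values may be double-quoted
        return info

    sample = 'ID=ubuntu\nVERSION_ID="22.04"\nVERSION_CODENAME=jammy'
    print(parse_os_release(sample))
    # {'ID': 'ubuntu', 'VERSION_ID': '22.04', 'VERSION_CODENAME': 'jammy'}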
2026-03-08T23:11:10.715 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-08T23:11:10.725 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-08T23:11:10.726 DEBUG:teuthology.orchestra.run.vm10:> test -z $(ls -A /var/lib/ceph)
2026-03-08T23:11:10.727 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-08T23:11:10.728 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-08T23:11:10.758 INFO:teuthology.orchestra.run.vm10.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-08T23:11:10.759 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-08T23:11:10.767 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-08T23:11:10.770 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:11:10.997 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-08T23:11:10.999 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:11:11.238 DEBUG:teuthology.orchestra.run.vm10:> test -e /ceph-qa-ready
2026-03-08T23:11:11.241 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:11:11.642 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-08T23:11:11.643 INFO:teuthology.task.internal:Creating test directory...
2026-03-08T23:11:11.644 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-08T23:11:11.645 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-08T23:11:11.646 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-08T23:11:11.649 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-08T23:11:11.650 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-08T23:11:11.652 INFO:teuthology.task.internal:Creating archive directory...
2026-03-08T23:11:11.652 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-08T23:11:11.692 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-08T23:11:11.694 DEBUG:teuthology.orchestra.run.vm10:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-08T23:11:11.699 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-08T23:11:11.700 INFO:teuthology.task.internal:Enabling coredump saving...
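[editor's note] The check above passes despite the "No such file or directory" on stderr: with /var/lib/ceph absent, ls prints nothing to stdout, the unquoted $(...) expands to nothing, and test -z succeeds. A local Python equivalent of the intended "absent or empty" check:

    # Sketch: the check that /var/lib/ceph is absent or empty, done locally
    # instead of via `test -z $(ls -A /var/lib/ceph)`.
    import os

    def ceph_data_is_clean(path="/var/lib/ceph") -> bool:
        # A missing directory counts as clean -- matching the log above,
        # where `ls` fails but the check still passes.
        return not os.path.exists(path) or not os.listdir(path)

    print(ceph_data_is_clean())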
2026-03-08T23:11:11.700 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-08T23:11:11.738 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:11:11.738 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-08T23:11:11.741 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:11:11.741 DEBUG:teuthology.orchestra.run.vm10:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-08T23:11:11.743 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:11:11.744 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-08T23:11:11.780 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-08T23:11:11.784 DEBUG:teuthology.orchestra.run.vm10:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-08T23:11:11.787 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T23:11:11.792 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T23:11:11.792 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T23:11:11.794 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T23:11:11.796 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T23:11:11.798 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T23:11:11.799 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-08T23:11:11.801 INFO:teuthology.task.internal:Configuring sudo...
2026-03-08T23:11:11.801 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-08T23:11:11.836 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-08T23:11:11.840 DEBUG:teuthology.orchestra.run.vm10:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-08T23:11:11.848 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-08T23:11:11.851 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
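[editor's note] internal.coredump first rules out container environments (the /run/.containerenv and /.dockerenv probes above), then points kernel.core_pattern at the archive directory. In the pattern, %t expands to the dump time in epoch seconds and %p to the dumping PID, so cores land as e.g. 1762640000.12345.core. A sketch that reads the setting back for verification:

    # Sketch: check that kernel.core_pattern was redirected into the
    # test archive, as done by the sysctl calls above. Linux-only.
    CORE_DIR = "/home/ubuntu/cephtest/archive/coredump"

    with open("/proc/sys/kernel/core_pattern") as f:
        pattern = f.read().strip()

    expected = f"{CORE_DIR}/%t.%p.core"   # %t = epoch time, %p = PID
    print(pattern, "OK" if pattern == expected else "MISMATCH")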
2026-03-08T23:11:11.851 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-08T23:11:11.888 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-08T23:11:11.889 DEBUG:teuthology.orchestra.run.vm10:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-08T23:11:11.893 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:11:11.934 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:11:11.978 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:11:11.978 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-08T23:11:12.027 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:11:12.030 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:11:12.074 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:11:12.074 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-08T23:11:12.123 DEBUG:teuthology.orchestra.run.vm10:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:11:12.126 DEBUG:teuthology.orchestra.run.vm10:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:11:12.169 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:11:12.170 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-08T23:11:12.218 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-08T23:11:12.219 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-08T23:11:12.220 DEBUG:teuthology.orchestra.run.vm10:> sudo service rsyslog restart
2026-03-08T23:11:12.274 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-08T23:11:12.275 INFO:teuthology.task.internal:Starting timer...
2026-03-08T23:11:12.275 INFO:teuthology.run_tasks:Running task pcp...
2026-03-08T23:11:12.278 INFO:teuthology.run_tasks:Running task selinux...
2026-03-08T23:11:12.281 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-08T23:11:12.281 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-08T23:11:12.281 INFO:teuthology.task.selinux:Excluding vm10: VMs are not yet supported
2026-03-08T23:11:12.281 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-08T23:11:12.281 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-08T23:11:12.281 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-08T23:11:12.281 INFO:teuthology.run_tasks:Running task ansible.cephlab...
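[editor's note] The `sudo dd of=/etc/rsyslog.d/80-cephtest.conf` runs above take the file body on stdin, which is why no content appears in the log. A hedged sketch of that write-remote-root-file-via-stdin pattern using plain ssh; the rsyslog rule shown is a placeholder for illustration, not teuthology's actual file content:

    # Sketch: write a remote root-owned file by piping stdin into `sudo dd`,
    # the same pattern as the dd calls above. Content below is illustrative.
    import subprocess

    def write_remote_file(host: str, path: str, content: str) -> None:
        subprocess.run(["ssh", host, f"sudo dd of={path}"],
                       input=content.encode(), check=True)

    write_remote_file(
        "ubuntu@vm02.local",
        "/etc/rsyslog.d/80-cephtest.conf",
        "kern.* /home/ubuntu/cephtest/archive/syslog/kern.log\n",  # placeholder rule
    )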
2026-03-08T23:11:12.282 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-08T23:11:12.283 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-08T23:11:12.288 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-08T23:11:12.869 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-08T23:11:12.875 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-08T23:11:12.876 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryp2byvx8y --limit vm02.local,vm04.local,vm10.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-08T23:14:22.359 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm02.local'), Remote(name='ubuntu@vm04.local'), Remote(name='ubuntu@vm10.local')]
2026-03-08T23:14:22.359 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-08T23:14:22.360 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T23:14:22.421 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-08T23:14:22.493 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-08T23:14:22.493 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-08T23:14:22.493 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T23:14:22.552 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-08T23:14:22.624 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-08T23:14:22.624 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm10.local'
2026-03-08T23:14:22.625 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm10.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T23:14:22.689 DEBUG:teuthology.orchestra.run.vm10:> true
2026-03-08T23:14:22.912 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm10.local'
2026-03-08T23:14:22.913 INFO:teuthology.run_tasks:Running task clock...
2026-03-08T23:14:22.915 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
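[editor's note] Stepping back to the ansible.cephlab invocation a few entries up: the task shells out to ansible-playbook with the merged overrides. A sketch reconstructing that exact command as an argv list, so each flag stays greppable (all values taken verbatim from the log; check=True raises if the play fails):

    # Sketch: the ansible-playbook invocation logged above, as an argv list.
    import json, subprocess

    extra_vars = {"ansible_ssh_user": "ubuntu", "timezone": "UTC"}
    cmd = [
        "ansible-playbook", "-v",
        "--extra-vars", json.dumps(extra_vars),
        "-i", "/tmp/teuth_ansible_inventoryp2byvx8y",   # temp inventory from the log
        "--limit", "vm02.local,vm04.local,vm10.local",
        "/home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml",
        "--skip-tags", "nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,"
                       "kerberos,ntp-client,resolvconf,cpan,nfs",
    ]
    subprocess.run(cmd, check=True)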
2026-03-08T23:14:22.915 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-08T23:14:22.915 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:14:22.917 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-08T23:14:22.917 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:14:22.918 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-08T23:14:22.918 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:14:22.933 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-08T23:14:22.933 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Command line: ntpd -gq
2026-03-08T23:14:22.933 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: ----------------------------------------------------
2026-03-08T23:14:22.934 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: ntp-4 is maintained by Network Time Foundation,
2026-03-08T23:14:22.934 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-08T23:14:22.934 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: corporation.  Support and training for ntp-4 are
2026-03-08T23:14:22.934 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: available at https://www.nwtime.org/support
2026-03-08T23:14:22.934 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: ----------------------------------------------------
2026-03-08T23:14:22.934 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: proto: precision = 0.030 usec (-25)
2026-03-08T23:14:22.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: basedate set to 2022-02-04
2026-03-08T23:14:22.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: gps base set to 2022-02-06 (week 2196)
2026-03-08T23:14:22.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-08T23:14:22.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-08T23:14:22.935 INFO:teuthology.orchestra.run.vm04.stderr: 8 Mar 23:14:22 ntpd[16037]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 71 days ago
2026-03-08T23:14:22.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listen and drop on 0 v6wildcard [::]:123
2026-03-08T23:14:22.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-08T23:14:22.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listen normally on 2 lo 127.0.0.1:123
2026-03-08T23:14:22.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listen normally on 3 ens3 192.168.123.104:123
2026-03-08T23:14:22.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listen normally on 4 lo [::1]:123
2026-03-08T23:14:22.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:4%2]:123
2026-03-08T23:14:22.937 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:22 ntpd[16037]: Listening on routing socket on fd #22 for interface updates
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Command line: ntpd -gq
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: ----------------------------------------------------
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: ntp-4 is maintained by Network Time Foundation,
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: corporation.  Support and training for ntp-4 are
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: available at https://www.nwtime.org/support
2026-03-08T23:14:22.938 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: ----------------------------------------------------
2026-03-08T23:14:22.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: proto: precision = 0.031 usec (-25)
2026-03-08T23:14:22.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: basedate set to 2022-02-04
2026-03-08T23:14:22.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: gps base set to 2022-02-06 (week 2196)
2026-03-08T23:14:22.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-08T23:14:22.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-08T23:14:22.939 INFO:teuthology.orchestra.run.vm02.stderr: 8 Mar 23:14:22 ntpd[16052]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 71 days ago
2026-03-08T23:14:22.940 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listen and drop on 0 v6wildcard [::]:123
2026-03-08T23:14:22.940 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-08T23:14:22.940 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listen normally on 2 lo 127.0.0.1:123
2026-03-08T23:14:22.940 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listen normally on 3 ens3 192.168.123.102:123
2026-03-08T23:14:22.940 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listen normally on 4 lo [::1]:123
2026-03-08T23:14:22.940 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:2%2]:123
2026-03-08T23:14:22.941 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:22 ntpd[16052]: Listening on routing socket on fd #22 for interface updates
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Command line: ntpd -gq
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: ----------------------------------------------------
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: ntp-4 is maintained by Network Time Foundation,
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: corporation.  Support and training for ntp-4 are
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: available at https://www.nwtime.org/support
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: ----------------------------------------------------
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: proto: precision = 0.029 usec (-25)
2026-03-08T23:14:22.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: basedate set to 2022-02-04
2026-03-08T23:14:22.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: gps base set to 2022-02-06 (week 2196)
2026-03-08T23:14:22.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-08T23:14:22.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-08T23:14:22.974 INFO:teuthology.orchestra.run.vm10.stderr: 8 Mar 23:14:22 ntpd[15965]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 71 days ago
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listen and drop on 0 v6wildcard [::]:123
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listen normally on 2 lo 127.0.0.1:123
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listen normally on 3 ens3 192.168.123.110:123
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listen normally on 4 lo [::1]:123
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:a%2]:123
2026-03-08T23:14:22.975 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:22 ntpd[15965]: Listening on routing socket on fd #22 for interface updates
2026-03-08T23:14:23.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:23 ntpd[16037]: Soliciting pool server 88.198.34.135
2026-03-08T23:14:23.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:23 ntpd[16052]: Soliciting pool server 88.198.34.135
2026-03-08T23:14:23.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:23 ntpd[15965]: Soliciting pool server 185.41.106.152
2026-03-08T23:14:24.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:24 ntpd[16037]: Soliciting pool server 213.239.234.28
2026-03-08T23:14:24.936 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:24 ntpd[16037]: Soliciting pool server 78.47.56.71
2026-03-08T23:14:24.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:24 ntpd[16052]: Soliciting pool server 213.239.234.28
2026-03-08T23:14:24.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:24 ntpd[16052]: Soliciting pool server 78.47.56.71
2026-03-08T23:14:24.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:24 ntpd[15965]: Soliciting pool server 88.198.34.135
2026-03-08T23:14:24.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:24 ntpd[15965]: Soliciting pool server 77.42.16.222
2026-03-08T23:14:25.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:25 ntpd[16037]: Soliciting pool server 135.125.205.191
2026-03-08T23:14:25.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:25 ntpd[16037]: Soliciting pool server 195.201.173.232
2026-03-08T23:14:25.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:25 ntpd[16037]: Soliciting pool server 185.233.107.180
2026-03-08T23:14:25.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:25 ntpd[16052]: Soliciting pool server 135.125.205.191
2026-03-08T23:14:25.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:25 ntpd[16052]: Soliciting pool server 195.201.173.232
2026-03-08T23:14:25.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:25 ntpd[16052]: Soliciting pool server 185.233.107.180
2026-03-08T23:14:25.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:25 ntpd[15965]: Soliciting pool server 78.47.56.71
2026-03-08T23:14:25.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:25 ntpd[15965]: Soliciting pool server 213.239.234.28
2026-03-08T23:14:25.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:25 ntpd[15965]: Soliciting pool server 144.76.167.162
2026-03-08T23:14:26.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:26 ntpd[16037]: Soliciting pool server 85.121.52.237
2026-03-08T23:14:26.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:26 ntpd[16037]: Soliciting pool server 94.130.23.46
2026-03-08T23:14:26.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:26 ntpd[16037]: Soliciting pool server 185.41.106.152
2026-03-08T23:14:26.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:26 ntpd[16037]: Soliciting pool server 77.90.0.148
2026-03-08T23:14:26.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:26 ntpd[16052]: Soliciting pool server 85.121.52.237
2026-03-08T23:14:26.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:26 ntpd[16052]: Soliciting pool server 185.41.106.152
2026-03-08T23:14:26.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:26 ntpd[16052]: Soliciting pool server 77.90.0.148
2026-03-08T23:14:26.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:26 ntpd[15965]: Soliciting pool server 185.233.107.180
2026-03-08T23:14:26.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:26 ntpd[15965]: Soliciting pool server 135.125.205.191
2026-03-08T23:14:26.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:26 ntpd[15965]: Soliciting pool server 195.201.173.232
2026-03-08T23:14:26.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:26 ntpd[15965]: Soliciting pool server 185.248.189.10
2026-03-08T23:14:27.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:27 ntpd[16037]: Soliciting pool server 51.75.67.47
2026-03-08T23:14:27.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:27 ntpd[16037]: Soliciting pool server 49.12.199.148
2026-03-08T23:14:27.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:27 ntpd[16037]: Soliciting pool server 77.42.16.222
2026-03-08T23:14:27.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:27 ntpd[16037]: Soliciting pool server 185.125.190.58
2026-03-08T23:14:27.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:27 ntpd[16052]: Soliciting pool server 51.75.67.47
2026-03-08T23:14:27.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:27 ntpd[16052]: Soliciting pool server 49.12.199.148
2026-03-08T23:14:27.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:27 ntpd[16052]: Soliciting pool server 185.125.190.58
2026-03-08T23:14:27.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:27 ntpd[15965]: Soliciting pool server 77.90.0.148
2026-03-08T23:14:27.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:27 ntpd[15965]: Soliciting pool server 85.121.52.237
2026-03-08T23:14:27.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:27 ntpd[15965]: Soliciting pool server 91.189.91.157
2026-03-08T23:14:28.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:28 ntpd[16037]: Soliciting pool server 185.125.190.57
2026-03-08T23:14:28.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:28 ntpd[16037]: Soliciting pool server 152.53.191.142
2026-03-08T23:14:28.935 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:28 ntpd[16037]: Soliciting pool server 144.76.167.162
2026-03-08T23:14:28.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:28 ntpd[16052]: Soliciting pool server 185.125.190.57
2026-03-08T23:14:28.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:28 ntpd[16052]: Soliciting pool server 152.53.191.142
2026-03-08T23:14:28.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:28 ntpd[16052]: Soliciting pool server 144.76.167.162
2026-03-08T23:14:28.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:28 ntpd[15965]: Soliciting pool server 185.125.190.58
2026-03-08T23:14:28.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:28 ntpd[15965]: Soliciting pool server 51.75.67.47
2026-03-08T23:14:28.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:28 ntpd[15965]: Soliciting pool server 49.12.199.148
2026-03-08T23:14:29.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:29 ntpd[16052]: Soliciting pool server 185.125.190.56
2026-03-08T23:14:29.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:29 ntpd[16052]: Soliciting pool server 185.248.189.10
2026-03-08T23:14:29.939 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:29 ntpd[16052]: Soliciting pool server 2a01:4f8:201:3433::123
2026-03-08T23:14:29.973 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:29 ntpd[15965]: Soliciting pool server 185.125.190.57
2026-03-08T23:14:29.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:29 ntpd[15965]: Soliciting pool server 152.53.191.142
2026-03-08T23:14:29.974 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:29 ntpd[15965]: Soliciting pool server 2001:1640:3::3
2026-03-08T23:14:30.959 INFO:teuthology.orchestra.run.vm04.stdout: 8 Mar 23:14:30 ntpd[16037]: ntpd: time slew -0.000098 s
2026-03-08T23:14:30.959 INFO:teuthology.orchestra.run.vm04.stdout:ntpd: time slew -0.000098s
2026-03-08T23:14:30.963 INFO:teuthology.orchestra.run.vm02.stdout: 8 Mar 23:14:30 ntpd[16052]: ntpd: time slew -0.000858 s
2026-03-08T23:14:30.963 INFO:teuthology.orchestra.run.vm02.stdout:ntpd: time slew -0.000858s
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout:==============================================================================
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.980 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout:==============================================================================
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:30.985 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:32.997 INFO:teuthology.orchestra.run.vm10.stdout: 8 Mar 23:14:32 ntpd[15965]: ntpd: time slew -0.000851 s
2026-03-08T23:14:32.997 INFO:teuthology.orchestra.run.vm10.stdout:ntpd: time slew -0.000851s
2026-03-08T23:14:33.016 INFO:teuthology.orchestra.run.vm10.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:14:33.016 INFO:teuthology.orchestra.run.vm10.stdout:==============================================================================
2026-03-08T23:14:33.017 INFO:teuthology.orchestra.run.vm10.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:33.017 INFO:teuthology.orchestra.run.vm10.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:33.017 INFO:teuthology.orchestra.run.vm10.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:33.017 INFO:teuthology.orchestra.run.vm10.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:33.017 INFO:teuthology.orchestra.run.vm10.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:14:33.017 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-08T23:14:33.068 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon warn on pool no app': False}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-08T23:14:33.068 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:14:33.069 INFO:tasks.cephadm:Cluster fsid is 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:14:33.069 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-08T23:14:33.069 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.102', 'mon.b': '192.168.123.104', 'mon.c': '192.168.123.110'}
2026-03-08T23:14:33.069 INFO:tasks.cephadm:First mon is mon.a on vm02
2026-03-08T23:14:33.069 INFO:tasks.cephadm:First mgr is x
2026-03-08T23:14:33.069 INFO:tasks.cephadm:Normalizing hostnames...
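[editor's note] The cephadm task derives the container image directly from the job's sha1 and picks monitor identities from the role lists. A sketch of that mapping, using the values logged above; the role-to-IP table is assumed already resolved, and taking the first mon as the alphabetically first name is an assumption of this sketch (it matches the log here):

    # Sketch: reconstruct the "Cluster image" / "First mon" lines above.
    sha1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"
    image = f"quay.ceph.io/ceph-ci/ceph:{sha1}"

    mon_ips = {
        "mon.a": "192.168.123.102",   # vm02
        "mon.b": "192.168.123.104",   # vm04
        "mon.c": "192.168.123.110",   # vm10
    }
    first_mon = sorted(mon_ips)[0]    # assumption: lowest-sorted mon bootstraps
    print(image)
    print(first_mon, mon_ips[first_mon])   # mon.a 192.168.123.102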
2026-03-08T23:14:33.069 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s)
2026-03-08T23:14:33.077 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s)
2026-03-08T23:14:33.085 DEBUG:teuthology.orchestra.run.vm10:> sudo hostname $(hostname -s)
2026-03-08T23:14:33.093 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-08T23:14:33.093 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:14:33.703 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-08T23:14:34.302 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:14:34.303 INFO:tasks.cephadm:Discovered chacra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-08T23:14:34.303 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-08T23:14:34.303 DEBUG:teuthology.orchestra.run.vm02:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:35.796 INFO:teuthology.orchestra.run.vm02.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 8 23:14 /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:35.796 DEBUG:teuthology.orchestra.run.vm04:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:37.106 INFO:teuthology.orchestra.run.vm04.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 8 23:14 /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:37.106 DEBUG:teuthology.orchestra.run.vm10:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:38.466 INFO:teuthology.orchestra.run.vm10.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 8 23:14 /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:38.466 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:38.470 DEBUG:teuthology.orchestra.run.vm04:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:38.475 DEBUG:teuthology.orchestra.run.vm10:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-08T23:14:38.483 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-08T23:14:38.483 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-08T23:14:38.514 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-08T23:14:38.517 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-08T23:14:38.604 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-08T23:14:38.609 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-08T23:14:38.611 INFO:teuthology.orchestra.run.vm10.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-08T23:15:39.736 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-08T23:15:39.737 INFO:teuthology.orchestra.run.vm02.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-08T23:15:39.737 INFO:teuthology.orchestra.run.vm02.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-08T23:15:39.737 INFO:teuthology.orchestra.run.vm02.stdout: "repo_digests": [
2026-03-08T23:15:39.737 INFO:teuthology.orchestra.run.vm02.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-08T23:15:39.737 INFO:teuthology.orchestra.run.vm02.stdout: ]
2026-03-08T23:15:39.737 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout:{
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout: ]
2026-03-08T23:15:43.425 INFO:teuthology.orchestra.run.vm04.stdout:}
2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout:{
2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout: "repo_digests": [ 2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout: ] 2026-03-08T23:15:48.382 INFO:teuthology.orchestra.run.vm10.stdout:} 2026-03-08T23:15:48.394 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph 2026-03-08T23:15:48.402 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph 2026-03-08T23:15:48.409 DEBUG:teuthology.orchestra.run.vm10:> sudo mkdir -p /etc/ceph 2026-03-08T23:15:48.417 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph 2026-03-08T23:15:48.451 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph 2026-03-08T23:15:48.458 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 777 /etc/ceph 2026-03-08T23:15:48.465 INFO:tasks.cephadm:Writing seed config... 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [global] mon warn on pool no app = False 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-08T23:15:48.466 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-08T23:15:48.466 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-08T23:15:48.466 DEBUG:teuthology.orchestra.run.vm02:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-08T23:15:48.495 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = 91105a84-1b44-11f1-9a43-e95894f13987 mon warn on pool no app = False [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd 
debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = true bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-08T23:15:48.495 DEBUG:teuthology.orchestra.run.vm02:mon.a> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service 2026-03-08T23:15:48.538 DEBUG:teuthology.orchestra.run.vm02:mgr.x> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x.service 2026-03-08T23:15:48.581 INFO:tasks.cephadm:Bootstrapping... 2026-03-08T23:15:48.581 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.102 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:15:48.710 INFO:teuthology.orchestra.run.vm02.stdout:-------------------------------------------------------------------------------- 2026-03-08T23:15:48.711 INFO:teuthology.orchestra.run.vm02.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '91105a84-1b44-11f1-9a43-e95894f13987', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'x', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.102', '--skip-admin-label'] 2026-03-08T23:15:48.711 INFO:teuthology.orchestra.run.vm02.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-08T23:15:48.711 INFO:teuthology.orchestra.run.vm02.stdout:Verifying podman|docker is present... 2026-03-08T23:15:48.711 INFO:teuthology.orchestra.run.vm02.stdout:Verifying lvm2 is present... 2026-03-08T23:15:48.711 INFO:teuthology.orchestra.run.vm02.stdout:Verifying time synchronization is in place... 
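The seed config above is the task's boilerplate ceph.conf with the per-section overrides from the job YAML folded in, and the "Final config" dump is the merged result. A minimal sketch of that merge with Python's configparser; this is illustrative only, not the cephadm task's actual code:

    import configparser

    # Overrides exactly as the "Writing seed config..." lines report them.
    overrides = {
        "global": {"mon warn on pool no app": "False"},
        "mgr": {"debug mgr": "20", "debug ms": "1"},
        "mon": {"debug mon": "20", "debug ms": "1", "debug paxos": "20"},
        "osd": {"debug ms": "1", "debug osd": "20",
                "osd mclock iops capacity threshold hdd": "49000"},
    }

    conf = configparser.ConfigParser()
    conf.read("seed.ceph.conf")  # boilerplate defaults, e.g. the [global] block above
    for section, options in overrides.items():
        if not conf.has_section(section):
            conf.add_section(section)
        for key, value in options.items():
            conf.set(section, key, value)  # override wins, as in the final dump
    with open("seed.ceph.conf", "w") as out:
        conf.write(out)

Note how the override keys show up verbatim at the end of their sections in the dump (e.g. the [osd] section ends with debug ms, debug osd, and the mclock threshold).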
2026-03-08T23:15:48.714 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-08T23:15:48.714 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-08T23:15:48.716 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-08T23:15:48.716 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.718 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-08T23:15:48.718 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-08T23:15:48.720 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-08T23:15:48.720 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.722 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-08T23:15:48.722 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout masked
2026-03-08T23:15:48.724 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-08T23:15:48.724 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.726 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-08T23:15:48.726 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-08T23:15:48.728 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-08T23:15:48.728 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.731 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled
2026-03-08T23:15:48.734 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active
2026-03-08T23:15:48.734 INFO:teuthology.orchestra.run.vm02.stdout:Unit ntp.service is enabled and running
2026-03-08T23:15:48.734 INFO:teuthology.orchestra.run.vm02.stdout:Repeating the final host check...
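The probe order above (chrony, chronyd, systemd-timesyncd, ntpd, then ntp) is easy to replay by hand; a rough sketch of the same loop, assuming that candidate list, where a unit passes only when systemctl reports it both enabled and active (exit code 0 for both calls, so "masked" and "No such file or directory" fall out naturally):

    import subprocess

    # Unit names in the order the bootstrap log checks them.
    CANDIDATES = ["chrony.service", "chronyd.service",
                  "systemd-timesyncd.service", "ntpd.service", "ntp.service"]

    def find_time_sync_unit():
        for unit in CANDIDATES:
            enabled = subprocess.run(["systemctl", "is-enabled", unit],
                                     capture_output=True).returncode == 0
            active = subprocess.run(["systemctl", "is-active", unit],
                                    capture_output=True).returncode == 0
            if enabled and active:
                return unit
        return None

    print(find_time_sync_unit())  # "ntp.service" on these Ubuntu 22.04 VMs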
2026-03-08T23:15:48.734 INFO:teuthology.orchestra.run.vm02.stdout:docker (/usr/bin/docker) is present
2026-03-08T23:15:48.734 INFO:teuthology.orchestra.run.vm02.stdout:systemctl is present
2026-03-08T23:15:48.734 INFO:teuthology.orchestra.run.vm02.stdout:lvcreate is present
2026-03-08T23:15:48.736 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-08T23:15:48.736 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-08T23:15:48.738 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-08T23:15:48.738 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.740 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-08T23:15:48.740 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-08T23:15:48.743 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-08T23:15:48.743 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.745 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-08T23:15:48.745 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout masked
2026-03-08T23:15:48.747 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-08T23:15:48.747 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.749 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-08T23:15:48.750 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-08T23:15:48.752 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-08T23:15:48.752 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout inactive
2026-03-08T23:15:48.754 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout enabled
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stdout active
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Unit ntp.service is enabled and running
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Host looks OK
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Cluster fsid: 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Acquiring lock 140488415939168 on /run/cephadm/91105a84-1b44-11f1-9a43-e95894f13987.lock
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Lock 140488415939168 acquired on /run/cephadm/91105a84-1b44-11f1-9a43-e95894f13987.lock
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 3300 ...
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Verifying IP 192.168.123.102 port 6789 ...
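Verifying ports 3300 (msgr2) and 6789 (legacy msgr1) before laying down the mon amounts to checking that nothing already holds them on the mon IP. One way to reproduce the check is a bind test; a minimal sketch, not cephadm's actual implementation:

    import socket

    def port_is_free(ip: str, port: int) -> bool:
        # If we can bind the address, no mon (or anything else) holds it yet.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((ip, port))
                return True
            except OSError:
                return False

    for port in (3300, 6789):  # msgr2 and legacy msgr1 monitor ports
        print(port, port_is_free("192.168.123.102", port))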
2026-03-08T23:15:48.757 INFO:teuthology.orchestra.run.vm02.stdout:Base mon IP(s) is [192.168.123.102:3300, 192.168.123.102:6789], mon addrv is [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-08T23:15:48.759 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.102 metric 100
2026-03-08T23:15:48.759 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-08T23:15:48.759 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.102 metric 100
2026-03-08T23:15:48.759 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.102 metric 100
2026-03-08T23:15:48.760 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-08T23:15:48.760 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:2/64 scope link
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.1/32`
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:Mon IP `192.168.123.102` is in CIDR network `192.168.123.1/32`
2026-03-08T23:15:48.761 INFO:teuthology.orchestra.run.vm02.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-08T23:15:48.762 INFO:teuthology.orchestra.run.vm02.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-08T23:15:48.762 INFO:teuthology.orchestra.run.vm02.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
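The inferred list is worth a second look: 192.168.123.0/24 matches the mon IP by plain containment, while the 192.168.123.1/32 entries are the host route to the gateway whose preferred source address (src in the `ip route` output) is the mon IP, and each network apparently shows up once per verified mon endpoint (3300 and 6789), hence the duplicates. The containment part is a one-liner with Python's ipaddress module; a sketch over the routes printed above:

    import ipaddress

    mon_ip = ipaddress.ip_address("192.168.123.102")
    # Network prefixes from the `ip route` output above.
    for net in ("192.168.123.0/24", "172.17.0.0/16", "192.168.123.1/32"):
        if mon_ip in ipaddress.ip_network(net):
            print(f"Mon IP {mon_ip} is in CIDR network {net}")
    # Prints only 192.168.123.0/24: the /32 matches in the log come from the
    # route's src address, not from containment, and docker0 never matches.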
2026-03-08T23:15:49.768 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-08T23:15:49.768 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-08T23:15:49.768 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:15:49.768 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:15:50.240 INFO:teuthology.orchestra.run.vm02.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-08T23:15:50.240 INFO:teuthology.orchestra.run.vm02.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-08T23:15:50.240 INFO:teuthology.orchestra.run.vm02.stdout:Extracting ceph user uid/gid from container image...
2026-03-08T23:15:50.536 INFO:teuthology.orchestra.run.vm02.stdout:stat: stdout 167 167
2026-03-08T23:15:50.536 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial keys...
2026-03-08T23:15:51.058 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQAmA65pxcNLJhAA0XQ0pyDGPhErgUmKJjJhmQ==
2026-03-08T23:15:51.595 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQAnA65pFt/+HxAAlQrWKFHS5YUvXJYPtEP1Mg==
2026-03-08T23:15:51.805 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-authtool: stdout AQAnA65pQLPBLBAAP+rIZOv7CwXt0EDGyu/6Qg==
2026-03-08T23:15:51.806 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial monmap...
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:monmaptool for a [v2:192.168.123.102:3300,v1:192.168.123.102:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:setting min_mon_release = quincy
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: set fsid to 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:15:52.025 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-08T23:15:52.026 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:15:52.026 INFO:teuthology.orchestra.run.vm02.stdout:Creating mon...
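The doubled monmaptool output (once as captured stdout, once echoed by the bootstrap logger) corresponds to a single invocation inside the container. Reconstructed from the messages above, it is roughly the command below; the flags are standard monmaptool options, but the exact command line cephadm passes is inferred here, not quoted from its source:

    import subprocess

    fsid = "91105a84-1b44-11f1-9a43-e95894f13987"
    addrv = "[v2:192.168.123.102:3300,v1:192.168.123.102:6789]"

    # --create writes epoch 0; --addv registers mon.a under both protocol
    # addresses; the "setting min_mon_release = quincy" line is printed by
    # the tool itself while creating the map.
    subprocess.run(
        ["/usr/bin/monmaptool", "--create", "--clobber",
         "--fsid", fsid, "--addv", "a", addrv, "/tmp/monmap"],
        check=True)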
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.187+0000 7f15f11a1d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.187+0000 7f15f11a1d80 1 imported monmap:
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-08T23:15:51.971315+0000
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.187+0000 7f15f11a1d80 0 /usr/bin/ceph-mon: set fsid to 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Git sha 0
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: DB SUMMARY
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: DB Session ID: 0K5NLSUYC22O9G3GX6LK
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.error_if_exists: 0
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.create_if_missing: 1
2026-03-08T23:15:52.351 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.env: 0x55625bd48dc0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.info_log: 0x5562646feda0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.statistics: (nil)
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.use_fsync: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.db_log_dir:
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.wal_dir:
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.write_buffer_manager: 0x5562646f55e0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.unordered_write: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.row_cache: None
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.wal_filter: None
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.two_write_queues: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.wal_compression: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.atomic_flush: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-08T23:15:52.352 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_open_files: -1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Compression algorithms supported:
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kZSTD supported: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kXpressCompression supported: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kZlibCompression supported: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.199+0000 7f15f11a1d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.211+0000 7f15f11a1d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.merge_operator:
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_filter: None
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562646f1520)
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x556264717350
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-08T23:15:52.353 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression: NoCompression
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.num_levels: 7
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-08T23:15:52.354 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.bloom_locality: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.ttl: 2592000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.enable_blob_files: false
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.min_blob_size: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f5df6ba6-3e98-4b10-80b1-3a1e755ab7a8
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.223+0000 7f15f11a1d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.235+0000 7f15f11a1d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556264718e00
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.235+0000 7f15f11a1d80 4 rocksdb: DB pointer 0x5562647fc000
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.235+0000 7f15e892b640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.235+0000 7f15e892b640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-08T23:15:52.355 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-08T23:15:52.356
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x556264717350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.243+0000 7f15f11a1d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.243+0000 
7f15f11a1d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T23:15:52.243+0000 7f15f11a1d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-08T23:15:52.356 INFO:teuthology.orchestra.run.vm02.stdout:create mon.a on 2026-03-08T23:15:52.748 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-08T23:15:52.926 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-91105a84-1b44-11f1-9a43-e95894f13987.target → /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987.target. 2026-03-08T23:15:52.926 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-91105a84-1b44-11f1-9a43-e95894f13987.target → /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987.target. 2026-03-08T23:15:53.115 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a 2026-03-08T23:15:53.115 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service: Unit ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service not loaded. 2026-03-08T23:15:53.288 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987.target.wants/ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service → /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service. 2026-03-08T23:15:53.296 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-08T23:15:53.296 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available 2026-03-08T23:15:53.296 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon to start... 2026-03-08T23:15:53.296 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mon... 
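The "Waiting for mon to start..." / "Waiting for mon..." records above are cephadm's bootstrap polling the freshly created monitor until it answers. A minimal sketch of that idea in Python (not cephadm's actual implementation; the config/keyring paths, timeout, and poll interval here are assumptions):

import subprocess
import time

def wait_for_mon(conf='/etc/ceph/ceph.conf',
                 keyring='/etc/ceph/ceph.client.admin.keyring',
                 timeout=60, interval=1.0):
    """Poll `ceph status` until the mon responds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        r = subprocess.run(['ceph', '-c', conf, '-k', keyring, 'status'],
                           capture_output=True, text=True)
        if r.returncode == 0:
            return r.stdout  # the "mon is available" case seen in the log
        time.sleep(interval)
    raise TimeoutError('mon did not become available')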
2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout cluster: 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout id: 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout services: 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.202836s) 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:53.741 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout data: 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout pgs: 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:mon is available 2026-03-08T23:15:53.742 INFO:teuthology.orchestra.run.vm02.stdout:Assimilating anything we can from ceph.conf... 2026-03-08T23:15:53.794 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:53 vm02 bash[16982]: cluster 2026-03-08T23:15:53.493027+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global] 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789] 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd] 
2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-08T23:15:53.953 INFO:teuthology.orchestra.run.vm02.stdout:Generating new minimal ceph.conf... 2026-03-08T23:15:54.169 INFO:teuthology.orchestra.run.vm02.stdout:Restarting the monitor... 2026-03-08T23:15:54.317 INFO:teuthology.orchestra.run.vm02.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section 2026-03-08T23:15:54.393 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: Stopping Ceph mon.a for 91105a84-1b44-11f1-9a43-e95894f13987... 2026-03-08T23:15:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[16982]: debug 2026-03-08T23:15:54.207+0000 7ff4ef899640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-08T23:15:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[16982]: debug 2026-03-08T23:15:54.207+0000 7ff4ef899640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-08T23:15:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17370]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-mon-a 2026-03-08T23:15:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service: Deactivated successfully. 2026-03-08T23:15:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: Stopped Ceph mon.a for 91105a84-1b44-11f1-9a43-e95894f13987. 2026-03-08T23:15:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: Started Ceph mon.a for 91105a84-1b44-11f1-9a43-e95894f13987. 2026-03-08T23:15:54.580 INFO:teuthology.orchestra.run.vm02.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-08T23:15:54.581 INFO:teuthology.orchestra.run.vm02.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:15:54.581 INFO:teuthology.orchestra.run.vm02.stdout:Creating mgr... 2026-03-08T23:15:54.581 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-08T23:15:54.581 INFO:teuthology.orchestra.run.vm02.stdout:Verifying port 0.0.0.0:8765 ... 
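The "Assimilating anything we can from ceph.conf..." and "Generating new minimal ceph.conf..." steps above correspond to `ceph config assimilate-conf` and `ceph config generate-minimal-conf`: options that can live in the monitors' central config store are moved there, and the file that remains keeps little more than fsid and mon_host. The "Verifying port ..." records are bootstrap checking that the mgr's ports are still free before creating the daemon. A minimal sketch of such a check (the bind-probe approach is an assumption, not cephadm's actual code):

import socket

def port_is_free(addr: str, port: int) -> bool:
    """Try to bind the address; failure (e.g. EADDRINUSE) means it is taken."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((addr, port))
            return True
        except OSError:
            return False

# The two mgr-related ports checked in the log:
for p in (9283, 8765):
    print(f'0.0.0.0:{p} free:', port_is_free('0.0.0.0', p))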
2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.439+0000 7f7983f75d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.439+0000 7f7983f75d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.439+0000 7f7983f75d80 0 pidfile_write: ignore empty --pid-file 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 0 load: jerasure load: lrc 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Git sha 0 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: DB SUMMARY 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: DB Session ID: C59M1HNU4P9SXU9WBA3L 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: CURRENT file: CURRENT 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-08T23:15:54.716 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75491 ; 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.error_if_exists: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.create_if_missing: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 
23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.env: 0x556f0a466dc0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.info_log: 0x556f15fded00 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.statistics: (nil) 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.use_fsync: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-08T23:15:54.717 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.db_log_dir: 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.wal_dir: 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.write_buffer_manager: 0x556f15fe3900 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-08T23:15:54.717 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.unordered_write: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.row_cache: None 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.wal_filter: None 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.two_write_queues: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.wal_compression: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.atomic_flush: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: 
debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-08T23:15:54.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_open_files: -1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Compression algorithms supported: 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kZSTD supported: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kXpressCompression supported: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kZlibCompression supported: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.merge_operator: 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_filter: None 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556f15fde480) 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cache_index_and_filter_blocks: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: pin_top_level_index_and_filter: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: index_type: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: data_block_index_type: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: index_shortening: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: data_block_hash_table_util_ratio: 0.750000 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: checksum: 4 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: no_block_cache: 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_cache: 0x556f16005350 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_cache_name: BinnedLRUCache 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_cache_options: 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: capacity : 536870912 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: num_shard_bits : 4 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: strict_capacity_limit : 0 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: high_pri_pool_ratio: 0.000 2026-03-08T23:15:54.719 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_cache_compressed: (nil) 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: persistent_cache: (nil) 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_size: 4096 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_size_deviation: 10 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_restart_interval: 16 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: index_block_restart_interval: 1 2026-03-08T23:15:54.719 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: metadata_block_size: 4096 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: partition_filters: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: use_delta_encoding: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: filter_policy: bloomfilter 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: whole_key_filtering: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: verify_compression: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: read_amp_bytes_per_bit: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: format_version: 5 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: enable_index_compression: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: block_align: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: max_auto_readahead_size: 262144 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: prepopulate_block_cache: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: initial_auto_readahead_size: 8192 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: num_file_reads_for_auto_readahead: 2 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression: NoCompression 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.num_levels: 7 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-08T23:15:54.720 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 
7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-08T23:15:54.720 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.bloom_locality: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.ttl: 2592000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-08T23:15:54.721 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.enable_blob_files: false 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.min_blob_size: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.443+0000 7f7983f75d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.447+0000 7f7983f75d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.447+0000 7f7983f75d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.447+0000 7f7983f75d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f5df6ba6-3e98-4b10-80b1-3a1e755ab7a8 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.447+0000 7f7983f75d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773011754454163, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.447+0000 7f7983f75d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-08T23:15:54.721 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773011754455168, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70836, "index_size": 178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9693, "raw_average_key_size": 49, "raw_value_size": 65342, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773011754, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f5df6ba6-3e98-4b10-80b1-3a1e755ab7a8", "db_session_id": "C59M1HNU4P9SXU9WBA3L", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773011754455219, "job": 1, "event": "recovery_finished"} 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556f16006e00 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 4 rocksdb: DB pointer 0x556f16112000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.451+0000 7f7983f75d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] at bind addrs [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 1 mon.a@-1(???) 
e1 preinit fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 0 mon.a@-1(???).mds e1 new map 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 0 mon.a@-1(???).mds e1 print_map 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: e1 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: btime 2026-03-08T23:15:53:503174+0000 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: legacy client fscid: -1 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: No filesystems configured 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T23:15:54.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: debug 2026-03-08T23:15:54.455+0000 7f7983f75d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462527+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462527+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462554+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462554+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 
23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462561+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462561+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462565+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T23:15:51.971315+0000 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462565+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T23:15:51.971315+0000 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462581+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462581+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462585+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462585+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462588+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462588+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462592+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462592+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462854+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462854+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462868+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.462868+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.473862+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-08T23:15:54.722 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 bash[17457]: cluster 2026-03-08T23:15:54.473862+0000 mon.a (mon.0) 11 : cluster [DBG] 
mgrmap e1: no daemons active 2026-03-08T23:15:54.777 INFO:teuthology.orchestra.run.vm02.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x 2026-03-08T23:15:54.777 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Failed to reset failed state of unit ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x.service: Unit ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x.service not loaded. 2026-03-08T23:15:54.959 INFO:teuthology.orchestra.run.vm02.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987.target.wants/ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x.service → /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service. 2026-03-08T23:15:54.967 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-08T23:15:54.967 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to enable service . firewalld.service is not available 2026-03-08T23:15:54.967 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-08T23:15:54.967 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-08T23:15:54.967 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr to start... 2026-03-08T23:15:54.967 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr... 2026-03-08T23:15:54.987 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: Started Ceph mgr.x for 91105a84-1b44-11f1-9a43-e95894f13987. 2026-03-08T23:15:54.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:15:54.987 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:15:55.245 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:55 vm02 bash[17721]: debug 2026-03-08T23:15:55.199+0000 7faf7af87140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "91105a84-1b44-11f1-9a43-e95894f13987", 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "a" 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 
0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-08T23:15:53:503174+0000", 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:55.259 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-08T23:15:53.504086+0000", 2026-03-08T23:15:55.260 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T23:15:55.260 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:55.260 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-08T23:15:55.260 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:15:55.260 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (1/15)... 
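Here the harness polls `ceph status --format json-pretty` and keeps waiting while mgrmap.available is false, up to the 15 attempts visible in the "(1/15)" counter. A rough Python equivalent of that wait loop, assuming an admin keyring on the host (a sketch of the pattern, not the harness's actual implementation):

    import json
    import subprocess
    import time

    def mgr_available() -> bool:
        # The same check the log shows: parse `ceph status` output and
        # read mgrmap.available from the returned JSON.
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        return json.loads(out)["mgrmap"]["available"]

    for attempt in range(1, 16):                 # the log caps this at 15 tries
        if mgr_available():
            print("mgr is available")
            break
        print(f"mgr not available, waiting ({attempt}/15)...")
        time.sleep(2)                            # retry interval is a guess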
2026-03-08T23:15:55.537 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:55 vm02 bash[17721]: debug 2026-03-08T23:15:55.239+0000 7faf7af87140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T23:15:55.537 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:55 vm02 bash[17721]: debug 2026-03-08T23:15:55.359+0000 7faf7af87140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T23:15:55.893 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:55 vm02 bash[17721]: debug 2026-03-08T23:15:55.643+0000 7faf7af87140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T23:15:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:55 vm02 bash[17457]: audit 2026-03-08T23:15:54.531390+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.102:0/1703890835' entity='client.admin' 2026-03-08T23:15:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:55 vm02 bash[17457]: audit 2026-03-08T23:15:54.531390+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.102:0/1703890835' entity='client.admin' 2026-03-08T23:15:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:55 vm02 bash[17457]: audit 2026-03-08T23:15:55.190893+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.102:0/372855996' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T23:15:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:55 vm02 bash[17457]: audit 2026-03-08T23:15:55.190893+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.102:0/372855996' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T23:15:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.083+0000 7faf7af87140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T23:15:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.167+0000 7faf7af87140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T23:15:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-08T23:15:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-08T23:15:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: from numpy import show_config as show_numpy_config 2026-03-08T23:15:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.287+0000 7faf7af87140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-08T23:15:56.893 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.427+0000 7faf7af87140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-08T23:15:56.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.467+0000 7faf7af87140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-08T23:15:56.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.507+0000 7faf7af87140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-08T23:15:56.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.547+0000 7faf7af87140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T23:15:56.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:56 vm02 bash[17721]: debug 2026-03-08T23:15:56.595+0000 7faf7af87140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T23:15:57.274 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.015+0000 7faf7af87140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T23:15:57.274 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.047+0000 7faf7af87140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T23:15:57.274 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.083+0000 7faf7af87140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T23:15:57.274 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.223+0000 7faf7af87140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "91105a84-1b44-11f1-9a43-e95894f13987", 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-08T23:15:57.517 
INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "a" 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-08T23:15:57.517 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-08T23:15:53:503174+0000", 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stdout "available": false, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-08T23:15:53.504086+0000", 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:15:57.518 INFO:teuthology.orchestra.run.vm02.stdout:mgr not available, waiting (2/15)... 2026-03-08T23:15:57.608 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.271+0000 7faf7af87140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T23:15:57.608 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.315+0000 7faf7af87140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T23:15:57.608 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.443+0000 7faf7af87140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:15:57.608 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:57 vm02 bash[17457]: audit 2026-03-08T23:15:57.471512+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.102:0/3091593712' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T23:15:57.608 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:57 vm02 bash[17457]: audit 2026-03-08T23:15:57.471512+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.102:0/3091593712' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T23:15:57.893 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.603+0000 7faf7af87140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T23:15:57.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.767+0000 7faf7af87140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T23:15:57.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.799+0000 7faf7af87140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T23:15:57.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.843+0000 7faf7af87140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T23:15:58.272 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:57 vm02 bash[17721]: debug 2026-03-08T23:15:57.983+0000 7faf7af87140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:15:58.272 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:15:58 vm02 bash[17721]: debug 2026-03-08T23:15:58.199+0000 7faf7af87140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: cluster 2026-03-08T23:15:58.206276+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon x 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: cluster 2026-03-08T23:15:58.206276+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon x 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: cluster 2026-03-08T23:15:58.211069+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: x(active, starting, since 0.00485359s) 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: cluster 2026-03-08T23:15:58.211069+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: x(active, starting, since 0.00485359s) 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.213544+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.213544+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.213853+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.213853+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.214153+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 
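The repeated "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" entries are the mgr importing each bundled Python module and noting that it declares no NOTIFY_TYPES attribute; they are noisy but harmless here, and the mon records show mgr.x activating normally alongside them. For reference, a hedged sketch of the declaration these warnings refer to (mgr_module is importable only inside the ceph-mgr runtime, and the module name and notify types below are illustrative):

    # Runs only inside ceph-mgr; shown purely for illustration.
    from mgr_module import MgrModule, NotifyType

    class ExampleModule(MgrModule):
        # Declaring NOTIFY_TYPES silences the "missing NOTIFY_TYPES member"
        # warning and tells the mgr which notifications to deliver.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification", notify_type)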
2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.214153+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.214425+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.214425+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.214718+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.214718+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: cluster 2026-03-08T23:15:58.219971+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon x is now available 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: cluster 2026-03-08T23:15:58.219971+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon x is now available 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.228824+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.228824+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.229856+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' 2026-03-08T23:15:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.229856+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' 2026-03-08T23:15:58.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.232747+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' 2026-03-08T23:15:58.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.232747+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' 2026-03-08T23:15:58.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.233539+0000 mon.a (mon.0) 26 : audit 
[INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:15:58.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.233539+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:15:58.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.235912+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' 2026-03-08T23:15:58.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:15:58 vm02 bash[17457]: audit 2026-03-08T23:15:58.235912+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' 2026-03-08T23:15:59.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:15:59.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:15:59.908 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsid": "91105a84-1b44-11f1-9a43-e95894f13987", 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "health": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 0 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "a" 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: 
stdout "osd_up_since": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "btime": "2026-03-08T23:15:53:503174+0000", 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "restful" 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ], 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T23:15:59.909 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "modified": "2026-03-08T23:15:53.504086+0000", 2026-03-08T23:15:59.910 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "services": {} 
2026-03-08T23:15:59.910 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout }, 2026-03-08T23:15:59.910 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-08T23:15:59.910 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:15:59.910 INFO:teuthology.orchestra.run.vm02.stdout:mgr is available 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [global] 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout fsid = 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789] 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:16:00.271 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-08T23:16:00.272 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-08T23:16:00.272 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 2026-03-08T23:16:00.272 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout [osd] 2026-03-08T23:16:00.272 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-08T23:16:00.272 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-08T23:16:00.272 INFO:teuthology.orchestra.run.vm02.stdout:Enabling cephadm module... 2026-03-08T23:16:00.538 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:00 vm02 bash[17457]: cluster 2026-03-08T23:15:59.214178+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: x(active, since 1.00797s) 2026-03-08T23:16:00.538 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:00 vm02 bash[17457]: cluster 2026-03-08T23:15:59.214178+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: x(active, since 1.00797s) 2026-03-08T23:16:00.538 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:00 vm02 bash[17457]: audit 2026-03-08T23:15:59.841562+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.102:0/3138267933' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T23:16:00.538 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:00 vm02 bash[17457]: audit 2026-03-08T23:15:59.841562+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.102:0/3138267933' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T23:16:00.538 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:00 vm02 bash[17457]: audit 2026-03-08T23:16:00.202776+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 
192.168.123.102:0/3900145137' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-08T23:16:00.538 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:00 vm02 bash[17457]: audit 2026-03-08T23:16:00.202776+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.102:0/3900145137' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:01 vm02 bash[17721]: ignoring --setuser ceph since I am not root 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:01 vm02 bash[17721]: ignoring --setgroup ceph since I am not root 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:01 vm02 bash[17721]: debug 2026-03-08T23:16:01.371+0000 7f96041f9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:01 vm02 bash[17721]: debug 2026-03-08T23:16:01.427+0000 7f96041f9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:01 vm02 bash[17457]: cluster 2026-03-08T23:16:00.235500+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: x(active, since 2s) 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:01 vm02 bash[17457]: cluster 2026-03-08T23:16:00.235500+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: x(active, since 2s) 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:01 vm02 bash[17457]: audit 2026-03-08T23:16:00.546156+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.102:0/411893538' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-08T23:16:01.561 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:01 vm02 bash[17457]: audit 2026-03-08T23:16:00.546156+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.102:0/411893538' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "active_name": "x", 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart... 2026-03-08T23:16:01.659 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 5... 
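[Two bootstrap steps are interleaved in the entries above: the INI fragment dumped earlier is imported into the monitors' central config store via "config assimilate-conf", and the cephadm mgr module is enabled, which forces the active mgr to respawn, hence the "Waiting for mgr epoch 5..." poll. A minimal hand-run equivalent, assuming jq is installed and using placeholder file names:

    # Import the INI fragment; options that cannot be stored centrally
    # are written back out via -o:
    ceph config assimilate-conf -i bootstrap.conf -o leftover.conf
    ceph mgr module enable cephadm
    # Poll 'ceph mgr stat' (same JSON fields as shown above) until the
    # mgr map epoch reaches the target recorded before the respawn:
    until [ "$(ceph mgr stat | jq -r .epoch)" -ge 5 ]; do sleep 1; done
]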
2026-03-08T23:16:01.893 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:01 vm02 bash[17721]: debug 2026-03-08T23:16:01.555+0000 7f96041f9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T23:16:02.246 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:01 vm02 bash[17721]: debug 2026-03-08T23:16:01.891+0000 7f96041f9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.315+0000 7f96041f9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.395+0000 7f96041f9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:02 vm02 bash[17457]: audit 2026-03-08T23:16:01.242052+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.102:0/411893538' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:02 vm02 bash[17457]: audit 2026-03-08T23:16:01.242052+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.102:0/411893538' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:02 vm02 bash[17457]: cluster 2026-03-08T23:16:01.245981+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: x(active, since 3s) 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:02 vm02 bash[17457]: cluster 2026-03-08T23:16:01.245981+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: x(active, since 3s) 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:02 vm02 bash[17457]: audit 2026-03-08T23:16:01.601459+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.102:0/3909592986' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T23:16:02.511 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:02 vm02 bash[17457]: audit 2026-03-08T23:16:01.601459+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.102:0/3909592986' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
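[The repeated "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" lines here and below appear to be load-time noise, logged once per Python mgr module that does not declare which cluster-map notifications it consumes; the modules still load. One way to confirm module states after the respawn, again assuming jq:

    # Show which mgr modules ended up always-on / enabled / disabled:
    ceph mgr module ls -f json | \
        jq '{always_on: .always_on_modules, enabled: .enabled_modules}'
]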
2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: from numpy import show_config as show_numpy_config 2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.511+0000 7f96041f9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.647+0000 7f96041f9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.683+0000 7f96041f9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-08T23:16:02.763 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.719+0000 7f96041f9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-08T23:16:03.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.759+0000 7f96041f9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T23:16:03.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:02 vm02 bash[17721]: debug 2026-03-08T23:16:02.807+0000 7f96041f9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T23:16:03.480 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.223+0000 7f96041f9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T23:16:03.480 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.259+0000 7f96041f9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T23:16:03.480 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.295+0000 7f96041f9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T23:16:03.480 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.435+0000 7f96041f9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T23:16:03.773 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.475+0000 7f96041f9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T23:16:03.773 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.515+0000 7f96041f9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T23:16:03.773 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.619+0000 7f96041f9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:16:04.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.767+0000 7f96041f9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T23:16:04.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.935+0000 7f96041f9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T23:16:04.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:03 vm02 bash[17721]: debug 2026-03-08T23:16:03.967+0000 7f96041f9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T23:16:04.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:04 vm02 bash[17721]: debug 
2026-03-08T23:16:04.007+0000 7f96041f9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T23:16:04.429 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:04 vm02 bash[17721]: debug 2026-03-08T23:16:04.147+0000 7f96041f9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:16:04.429 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:04 vm02 bash[17721]: debug 2026-03-08T23:16:04.363+0000 7f96041f9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.370386+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon x restarted 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.370386+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon x restarted 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.370802+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon x 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.370802+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon x 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.379722+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.379722+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.379835+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: x(active, starting, since 0.00911406s) 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.379835+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: x(active, starting, since 0.00911406s) 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.382214+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.382214+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.382563+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.382563+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.383388+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 
cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.383388+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.383757+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.383757+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.384100+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.384100+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.391610+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon x is now available 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: cluster 2026-03-08T23:16:04.391610+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon x is now available 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.400604+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.400604+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.404815+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.404815+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.417252+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.417252+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.417942+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.417942+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.420578+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:04 vm02 bash[17457]: audit 2026-03-08T23:16:04.420578+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:05.423 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:16:05.423 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-08T23:16:05.423 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-08T23:16:05.423 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:16:05.423 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 5 is available 2026-03-08T23:16:05.423 INFO:teuthology.orchestra.run.vm02.stdout:Setting orchestrator backend to cephadm... 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: cephadm 2026-03-08T23:16:04.397718+0000 mgr.x (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: cephadm 2026-03-08T23:16:04.397718+0000 mgr.x (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: audit 2026-03-08T23:16:04.428821+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: audit 2026-03-08T23:16:04.428821+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: audit 2026-03-08T23:16:04.806197+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: audit 2026-03-08T23:16:04.806197+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: audit 2026-03-08T23:16:04.808849+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: audit 2026-03-08T23:16:04.808849+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: cluster 2026-03-08T23:16:05.382284+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: x(active, since 1.01157s) 2026-03-08T23:16:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:05 vm02 bash[17457]: cluster 2026-03-08T23:16:05.382284+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: x(active, since 1.01157s) 2026-03-08T23:16:05.952 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-08T23:16:05.952 INFO:teuthology.orchestra.run.vm02.stdout:Generating ssh key... 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: Generating public/private ed25519 key pair. 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: Your identification has been saved in /tmp/tmpn1g3hgcw/key 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: Your public key has been saved in /tmp/tmpn1g3hgcw/key.pub 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: The key fingerprint is: 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: SHA256:uEu+h4UhjNukMkNfiREs8+X0iEs8SmB5Zw2iOUTQURk ceph-91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: The key's randomart image is: 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: +--[ED25519 256]--+ 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: |+++=E+o | 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: |o=++o= . 
| 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: |o+*oX + | 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: | ooB+=.o | 2026-03-08T23:16:06.452 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: |o +=+..oS | 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: |+.oo. ... | 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: | + oo | 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: | o... | 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: | +o | 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:06 vm02 bash[17721]: +----[SHA256]-----+ 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.309347+0000 mgr.x (mgr.14118) 2 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Bus STARTING 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.309347+0000 mgr.x (mgr.14118) 2 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Bus STARTING 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.382646+0000 mgr.x (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.382646+0000 mgr.x (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.386491+0000 mgr.x (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.386491+0000 mgr.x (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.417129+0000 mgr.x (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Serving on https://192.168.123.102:7150 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.417129+0000 mgr.x (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Serving on https://192.168.123.102:7150 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.417532+0000 mgr.x (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Client ('192.168.123.102', 54016) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.417532+0000 mgr.x (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Client ('192.168.123.102', 54016) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) 
(_ssl.c:1147)') 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.518143+0000 mgr.x (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Serving on http://192.168.123.102:8765 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.518143+0000 mgr.x (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Serving on http://192.168.123.102:8765 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.518207+0000 mgr.x (mgr.14118) 8 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Bus STARTED 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:05.518207+0000 mgr.x (mgr.14118) 8 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Bus STARTED 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.518795+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.518795+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.663248+0000 mgr.x (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.663248+0000 mgr.x (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.666329+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.666329+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.671612+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.671612+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:05.920730+0000 mgr.x (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 
bash[17457]: audit 2026-03-08T23:16:05.920730+0000 mgr.x (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:06.177524+0000 mgr.x (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:06.177524+0000 mgr.x (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:06.177716+0000 mgr.x (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: cephadm 2026-03-08T23:16:06.177716+0000 mgr.x (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:06.192962+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:06.192962+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:06.195067+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:06.453 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:06 vm02 bash[17457]: audit 2026-03-08T23:16:06.195067+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:06.488 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:16:06.488 INFO:teuthology.orchestra.run.vm02.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-08T23:16:06.488 INFO:teuthology.orchestra.run.vm02.stdout:Adding key to root@localhost authorized_keys... 2026-03-08T23:16:06.488 INFO:teuthology.orchestra.run.vm02.stdout:Adding host vm02... 
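[The entries above cover cephadm's SSH bootstrap: the connect user is set to root, an ed25519 keypair is generated inside the mgr, and the public half is written to /home/ubuntu/cephtest/ceph.pub and appended to root's authorized_keys. A hand-run sketch; the ssh-copy-id step is an assumed stand-in for the direct authorized_keys append the test does:

    ceph cephadm set-user root
    ceph cephadm generate-key
    ceph cephadm get-pub-key > /home/ubuntu/cephtest/ceph.pub
    ssh-copy-id -f -i /home/ubuntu/cephtest/ceph.pub root@vm02
]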
2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: audit 2026-03-08T23:16:06.443265+0000 mgr.x (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: audit 2026-03-08T23:16:06.443265+0000 mgr.x (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: audit 2026-03-08T23:16:06.707915+0000 mgr.x (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: audit 2026-03-08T23:16:06.707915+0000 mgr.x (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: cluster 2026-03-08T23:16:07.205892+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: x(active, since 2s) 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: cluster 2026-03-08T23:16:07.205892+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: x(active, since 2s) 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: cephadm 2026-03-08T23:16:07.235035+0000 mgr.x (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-08T23:16:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:08 vm02 bash[17457]: cephadm 2026-03-08T23:16:07.235035+0000 mgr.x (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-08T23:16:08.613 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Added host 'vm02' with addr '192.168.123.102' 2026-03-08T23:16:08.613 INFO:teuthology.orchestra.run.vm02.stdout:Deploying unmanaged mon service... 2026-03-08T23:16:08.928 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-08T23:16:08.929 INFO:teuthology.orchestra.run.vm02.stdout:Deploying unmanaged mgr service... 2026-03-08T23:16:09.203 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
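[Here the bootstrap registers the host with the orchestrator and pins the mon and mgr services as unmanaged, so cephadm tracks, but does not reschedule, the daemons teuthology deployed itself. Equivalent commands:

    ceph orch host add vm02 192.168.123.102
    ceph orch apply mon --unmanaged
    ceph orch apply mgr --unmanaged
    ceph orch host ls    # confirm vm02 is listed with its address
]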
2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.494507+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.494507+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: cephadm 2026-03-08T23:16:08.494975+0000 mgr.x (mgr.14118) 16 : cephadm [INF] Added host vm02 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: cephadm 2026-03-08T23:16:08.494975+0000 mgr.x (mgr.14118) 16 : cephadm [INF] Added host vm02 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.498029+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.498029+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.886414+0000 mgr.x (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.886414+0000 mgr.x (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: cephadm 2026-03-08T23:16:08.887290+0000 mgr.x (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: cephadm 2026-03-08T23:16:08.887290+0000 mgr.x (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.890107+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:08.890107+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:09.162231+0000 mgr.x (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:09.162231+0000 mgr.x (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", 
"unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: cephadm 2026-03-08T23:16:09.162978+0000 mgr.x (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: cephadm 2026-03-08T23:16:09.162978+0000 mgr.x (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:09.168664+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:09.168664+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:09.440682+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.102:0/416131322' entity='client.admin' 2026-03-08T23:16:09.723 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:09 vm02 bash[17457]: audit 2026-03-08T23:16:09.440682+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.102:0/416131322' entity='client.admin' 2026-03-08T23:16:09.751 INFO:teuthology.orchestra.run.vm02.stdout:Enabling the dashboard module... 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:09.711520+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.102:0/423425676' entity='client.admin' 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:09.711520+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.102:0/423425676' entity='client.admin' 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:10.011374+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:10.011374+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:10.074921+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.102:0/3027471765' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:10.074921+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 
192.168.123.102:0/3027471765' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:10.284522+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:11.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:10 vm02 bash[17457]: audit 2026-03-08T23:16:10.284522+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' 2026-03-08T23:16:11.309 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:11 vm02 bash[17721]: ignoring --setuser ceph since I am not root 2026-03-08T23:16:11.309 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:11 vm02 bash[17721]: ignoring --setgroup ceph since I am not root 2026-03-08T23:16:11.310 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:11 vm02 bash[17721]: debug 2026-03-08T23:16:11.147+0000 7f207484f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T23:16:11.310 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:11 vm02 bash[17721]: debug 2026-03-08T23:16:11.187+0000 7f207484f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "active_name": "x", 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for the mgr to restart... 2026-03-08T23:16:11.443 INFO:teuthology.orchestra.run.vm02.stdout:Waiting for mgr epoch 9... 2026-03-08T23:16:11.593 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:11 vm02 bash[17721]: debug 2026-03-08T23:16:11.303+0000 7f207484f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T23:16:11.893 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:11 vm02 bash[17721]: debug 2026-03-08T23:16:11.615+0000 7f207484f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.055+0000 7f207484f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.139+0000 7f207484f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
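[Enabling the dashboard module triggers one more mgr respawn (the "Waiting for mgr epoch 9..." poll above), after which the remaining entries set up the dashboard itself: a self-signed certificate, an initial admin account, and a lookup of the HTTPS port. Collected as hand-runnable commands, with the password taken from the log below and the password file path a placeholder:

    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    printf '%s' 'vki0lnxwz0' > /tmp/dashboard-pass.txt
    ceph dashboard ac-user-create --force-password --pwd-update-required \
        -i /tmp/dashboard-pass.txt admin administrator
    ceph config get mgr mgr/dashboard/ssl_server_port    # returns 8443 here
]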
2026-03-08T23:16:12.266 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: from numpy import show_config as show_numpy_config 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:12 vm02 bash[17457]: audit 2026-03-08T23:16:11.012144+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.102:0/3027471765' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:12 vm02 bash[17457]: audit 2026-03-08T23:16:11.012144+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.102:0/3027471765' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:12 vm02 bash[17457]: cluster 2026-03-08T23:16:11.014727+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: x(active, since 6s) 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:12 vm02 bash[17457]: cluster 2026-03-08T23:16:11.014727+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: x(active, since 6s) 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:12 vm02 bash[17457]: audit 2026-03-08T23:16:11.398240+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.102:0/2390393474' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T23:16:12.266 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:12 vm02 bash[17457]: audit 2026-03-08T23:16:11.398240+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 
192.168.123.102:0/2390393474' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T23:16:12.531 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.263+0000 7f207484f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-08T23:16:12.531 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.403+0000 7f207484f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-08T23:16:12.531 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.443+0000 7f207484f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-08T23:16:12.531 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.483+0000 7f207484f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-08T23:16:12.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.527+0000 7f207484f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T23:16:12.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.579+0000 7f207484f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T23:16:13.246 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:12 vm02 bash[17721]: debug 2026-03-08T23:16:12.991+0000 7f207484f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T23:16:13.246 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.027+0000 7f207484f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T23:16:13.246 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.067+0000 7f207484f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T23:16:13.246 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.203+0000 7f207484f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T23:16:13.538 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.243+0000 7f207484f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T23:16:13.538 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.279+0000 7f207484f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T23:16:13.538 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.383+0000 7f207484f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:16:13.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.535+0000 7f207484f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T23:16:13.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.703+0000 7f207484f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T23:16:13.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.739+0000 7f207484f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T23:16:13.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.779+0000 7f207484f140 -1 mgr[py] Module balancer has missing 
NOTIFY_TYPES member 2026-03-08T23:16:14.205 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:13 vm02 bash[17721]: debug 2026-03-08T23:16:13.935+0000 7f207484f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:16:14.206 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:16:14 vm02 bash[17721]: debug 2026-03-08T23:16:14.159+0000 7f207484f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.165958+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon x restarted 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.165958+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon x restarted 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.166377+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon x 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.166377+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon x 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.170336+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.170336+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.170417+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: x(active, starting, since 0.00414788s) 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.170417+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: x(active, starting, since 0.00414788s) 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.174066+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.174066+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.174414+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.174414+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.175325+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:16:14.644 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.175325+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.175811+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.175811+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.176291+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.176291+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.181988+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon x is now available 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: cluster 2026-03-08T23:16:14.181988+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon x is now available 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.200612+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.200612+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.215100+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:16:14.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:14 vm02 bash[17457]: audit 2026-03-08T23:16:14.215100+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:16:15.238 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout { 2026-03-08T23:16:15.238 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-08T23:16:15.238 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-08T23:16:15.238 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout } 2026-03-08T23:16:15.238 INFO:teuthology.orchestra.run.vm02.stdout:mgr epoch 9 is available 2026-03-08T23:16:15.238 INFO:teuthology.orchestra.run.vm02.stdout:Generating a dashboard 
self-signed certificate... 2026-03-08T23:16:15.619 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:15 vm02 bash[17457]: audit 2026-03-08T23:16:14.228436+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:16:15.619 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:15 vm02 bash[17457]: audit 2026-03-08T23:16:14.228436+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:16:15.619 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:15 vm02 bash[17457]: cluster 2026-03-08T23:16:15.173853+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: x(active, since 1.00758s) 2026-03-08T23:16:15.619 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:15 vm02 bash[17457]: cluster 2026-03-08T23:16:15.173853+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: x(active, since 1.00758s) 2026-03-08T23:16:15.661 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-08T23:16:15.661 INFO:teuthology.orchestra.run.vm02.stdout:Creating initial admin user... 2026-03-08T23:16:16.153 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$CsrDoCN89Z8NDz0nFGImYeEZ//snCtxoik9PjQHezAyG.ZjNfAKea", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773011776, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-08T23:16:16.153 INFO:teuthology.orchestra.run.vm02.stdout:Fetching dashboard port number... 2026-03-08T23:16:16.554 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stdout 8443 2026-03-08T23:16:16.554 INFO:teuthology.orchestra.run.vm02.stdout:firewalld does not appear to be present 2026-03-08T23:16:16.554 INFO:teuthology.orchestra.run.vm02.stdout:Not possible to open ports <[8443]>. 
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout:Ceph Dashboard is now available at:
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout: URL: https://vm02.local:8443/
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout: User: admin
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout: Password: vki0lnxwz0
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.556 INFO:teuthology.orchestra.run.vm02.stdout:Saving cluster configuration to /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config directory
2026-03-08T23:16:16.854 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: cephadm 2026-03-08T23:16:15.543714+0000 mgr.x (mgr.14150) 4 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Bus STARTING
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: audit 2026-03-08T23:16:15.608180+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: audit 2026-03-08T23:16:15.610229+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: cephadm 2026-03-08T23:16:15.644947+0000 mgr.x (mgr.14150) 5 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Serving on http://192.168.123.102:8765
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: cephadm 2026-03-08T23:16:15.753932+0000 mgr.x (mgr.14150) 6 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Serving on https://192.168.123.102:7150
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: cephadm 2026-03-08T23:16:15.753976+0000 mgr.x (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Bus STARTED
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: cephadm 2026-03-08T23:16:15.754195+0000 mgr.x (mgr.14150) 8 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Client ('192.168.123.102', 40756) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: audit 2026-03-08T23:16:15.927223+0000 mgr.x (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: audit 2026-03-08T23:16:16.086704+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:16.855 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:16 vm02 bash[17457]: audit 2026-03-08T23:16:16.489579+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.102:0/2933246231' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:Or, if you are only running a single cluster on this host:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout: ceph telemetry on
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:For more information see:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:16.881 INFO:teuthology.orchestra.run.vm02.stdout:Bootstrap complete.
2026-03-08T23:16:16.902 INFO:tasks.cephadm:Fetching config...
2026-03-08T23:16:16.902 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:16:16.902 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-08T23:16:16.904 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-08T23:16:16.904 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:16:16.904 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-08T23:16:16.948 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-08T23:16:16.948 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:16:16.948 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/keyring of=/dev/stdout
2026-03-08T23:16:16.996 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-08T23:16:16.996 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:16:16.996 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-08T23:16:17.040 INFO:tasks.cephadm:Installing pub ssh key for root users...
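Note: teuthology fetches each bootstrap artifact (ceph.conf, the client.admin and mon keyrings, the cluster SSH public key) by running dd over its existing SSH session, with dd serving as a sudo-friendly cat whose output the harness captures. A hand-run equivalent of two of the fetches above, a sketch using this run's host and paths:

    ssh vm02 'dd if=/etc/ceph/ceph.conf of=/dev/stdout' > ceph.conf
    ssh vm02 'sudo dd if=/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/keyring of=/dev/stdout' > mon.keyring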
2026-03-08T23:16:17.040 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-08T23:16:17.098 INFO:teuthology.orchestra.run.vm02.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:17.104 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-08T23:16:17.116 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:17.121 DEBUG:teuthology.orchestra.run.vm10:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-08T23:16:17.133 INFO:teuthology.orchestra.run.vm10.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC+C9N8gsurqpr9osJDh1ByCQHChwaJWQiVCbZaKin24 ceph-91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:17.138 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-08T23:16:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:17 vm02 bash[17457]: audit 2026-03-08T23:16:16.845350+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.102:0/103063715' entity='client.admin'
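Note: the install/tee/chmod one-liners above are how cephadm's root SSH access is provisioned: the cluster public key (ceph.pub, fetched earlier) is appended to /root/.ssh/authorized_keys on every node so the mgr's cephadm module can reach it. Authorizing one more host by hand would follow the same pattern; 'newhost' below is a placeholder, not part of this run:

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph cephadm get-pub-key > ceph.pub
    ssh newhost 'sudo install -d -m 0700 /root/.ssh &&
                 sudo tee -a /root/.ssh/authorized_keys &&
                 sudo chmod 0600 /root/.ssh/authorized_keys' < ceph.pub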
2026-03-08T23:16:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:17 vm02 bash[17457]: cluster 2026-03-08T23:16:17.091035+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: x(active, since 2s)
2026-03-08T23:16:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:19 vm02 bash[17457]: audit 2026-03-08T23:16:18.213018+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:19 vm02 bash[17457]: audit 2026-03-08T23:16:18.756836+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:21.067 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:16:21.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:21 vm02 bash[17457]: cluster 2026-03-08T23:16:20.217819+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: x(active, since 6s)
2026-03-08T23:16:21.376 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-08T23:16:21.376 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-08T23:16:22.393 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:22 vm02 bash[17457]: audit 2026-03-08T23:16:21.324821+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.102:0/1825097943' entity='client.admin'
2026-03-08T23:16:25.077 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:16:25.402 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm04
2026-03-08T23:16:25.402 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:16:25.402 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.conf
2026-03-08T23:16:25.405 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:16:25.405 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.client.admin.keyring
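Note: the 'ceph orch client-keyring set client.admin '*' --mode 0755' call above registers a rule that makes cephadm keep the client.admin keyring present, with that mode, on every managed host ('*' placement); the per-host "Updating .../ceph.client.admin.keyring" mgr log lines that follow are that rule being reconciled. A sketch of how the active rules could be checked afterwards:

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph orch client-keyring ls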
2026-03-08T23:16:25.451 INFO:tasks.cephadm:Adding host vm04 to orchestrator...
2026-03-08T23:16:25.451 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch host add vm04
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:24.526626+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:24.529224+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:24.529743+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:24.531918+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:24.536445+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:24.538602+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:25 vm02 bash[17457]: audit 2026-03-08T23:16:25.324252+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:26.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:26 vm02 bash[17457]: audit 2026-03-08T23:16:25.321700+0000 mgr.x (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:26.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:26 vm02 bash[17457]: cephadm 2026-03-08T23:16:25.326321+0000 mgr.x (mgr.14150) 11 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-08T23:16:26.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:26 vm02 bash[17457]: cephadm 2026-03-08T23:16:25.356704+0000 mgr.x (mgr.14150) 12 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:16:26.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:26 vm02 bash[17457]: cephadm 2026-03-08T23:16:25.394153+0000 mgr.x (mgr.14150) 13 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:16:26.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:26 vm02 bash[17457]: cephadm 2026-03-08T23:16:25.420954+0000 mgr.x (mgr.14150) 14 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring
2026-03-08T23:16:29.085 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:16:30.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:30 vm02 bash[17457]: audit 2026-03-08T23:16:29.390707+0000 mgr.x (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:30.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:30 vm02 bash[17457]: cephadm 2026-03-08T23:16:29.923666+0000 mgr.x (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04
2026-03-08T23:16:31.165 INFO:teuthology.orchestra.run.vm02.stdout:Added host 'vm04' with addr '192.168.123.104'
2026-03-08T23:16:31.220 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch host ls --format=json
2026-03-08T23:16:32.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:32 vm02 bash[17457]: audit 2026-03-08T23:16:31.164817+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:32.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:32 vm02 bash[17457]: cephadm 2026-03-08T23:16:31.165226+0000 mgr.x (mgr.14150) 17 : cephadm [INF] Added host vm04
2026-03-08T23:16:32.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:32 vm02 bash[17457]: audit 2026-03-08T23:16:31.165494+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:32.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:32 vm02 bash[17457]: audit 2026-03-08T23:16:31.447894+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:33 vm02 bash[17457]: audit 2026-03-08T23:16:32.723568+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:33 vm02 bash[17457]: audit 2026-03-08T23:16:33.246650+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:35 vm02 bash[17457]: cluster 2026-03-08T23:16:34.176227+0000 mgr.x (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:35.828 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:16:36.084 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:16:36.084 INFO:teuthology.orchestra.run.vm02.stdout:[{"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}]
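Note: after each 'orch host add', the task re-runs 'ceph orch host ls --format=json' and checks the returned host list, as in the JSON just above. A shell sketch of the same check, assuming jq is installed on the node (it is not guaranteed to be):

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph orch host ls --format=json | jq -r '.[].hostname'
    # vm02
    # vm04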
"who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:36 vm02 bash[17457]: audit 2026-03-08T23:16:36.006522+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:36 vm02 bash[17457]: audit 2026-03-08T23:16:36.007134+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:36 vm02 bash[17457]: audit 2026-03-08T23:16:36.007134+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.007520+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.007520+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.008065+0000 mgr.x (mgr.14150) 19 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.008065+0000 mgr.x (mgr.14150) 19 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.042393+0000 mgr.x (mgr.14150) 20 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.042393+0000 mgr.x (mgr.14150) 20 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.153936+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.153936+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.156517+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.156517+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.158932+0000 mon.a (mon.0) 124 : audit [INF] 
from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:37.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:37 vm02 bash[17457]: audit 2026-03-08T23:16:36.158932+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.073195+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.073195+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: audit 2026-03-08T23:16:36.084812+0000 mgr.x (mgr.14150) 22 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: audit 2026-03-08T23:16:36.084812+0000 mgr.x (mgr.14150) 22 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.110312+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: cephadm 2026-03-08T23:16:36.110312+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: cluster 2026-03-08T23:16:36.176415+0000 mgr.x (mgr.14150) 24 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:38 vm02 bash[17457]: cluster 2026-03-08T23:16:36.176415+0000 mgr.x (mgr.14150) 24 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:39.838 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:16:40.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:40 vm02 bash[17457]: cluster 2026-03-08T23:16:38.176609+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:40.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:40 vm02 bash[17457]: cluster 2026-03-08T23:16:38.176609+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:41.975 INFO:teuthology.orchestra.run.vm02.stdout:Added host 'vm10' with addr '192.168.123.110' 2026-03-08T23:16:42.028 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch host ls --format=json 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: audit 2026-03-08T23:16:40.097104+0000 mgr.x (mgr.14150) 26 : audit [DBG] 
from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: audit 2026-03-08T23:16:40.097104+0000 mgr.x (mgr.14150) 26 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: cluster 2026-03-08T23:16:40.176756+0000 mgr.x (mgr.14150) 27 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: cluster 2026-03-08T23:16:40.176756+0000 mgr.x (mgr.14150) 27 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: cephadm 2026-03-08T23:16:40.667881+0000 mgr.x (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: cephadm 2026-03-08T23:16:40.667881+0000 mgr.x (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: audit 2026-03-08T23:16:41.975064+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: audit 2026-03-08T23:16:41.975064+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: audit 2026-03-08T23:16:41.975797+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:42 vm02 bash[17457]: audit 2026-03-08T23:16:41.975797+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:43.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:43 vm02 bash[17457]: cephadm 2026-03-08T23:16:41.975449+0000 mgr.x (mgr.14150) 29 : cephadm [INF] Added host vm10 2026-03-08T23:16:43.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:43 vm02 bash[17457]: cephadm 2026-03-08T23:16:41.975449+0000 mgr.x (mgr.14150) 29 : cephadm [INF] Added host vm10 2026-03-08T23:16:43.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:43 vm02 bash[17457]: audit 2026-03-08T23:16:42.266437+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:43.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:43 vm02 bash[17457]: audit 2026-03-08T23:16:42.266437+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:44 vm02 bash[17457]: cluster 2026-03-08T23:16:42.176932+0000 mgr.x (mgr.14150) 30 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:44 vm02 bash[17457]: cluster 2026-03-08T23:16:42.176932+0000 mgr.x 
(mgr.14150) 30 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:44 vm02 bash[17457]: audit 2026-03-08T23:16:43.550780+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:44 vm02 bash[17457]: audit 2026-03-08T23:16:43.550780+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:45.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:45 vm02 bash[17457]: audit 2026-03-08T23:16:44.125037+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:45.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:45 vm02 bash[17457]: audit 2026-03-08T23:16:44.125037+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:45.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:45 vm02 bash[17457]: cluster 2026-03-08T23:16:44.177085+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:45.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:45 vm02 bash[17457]: cluster 2026-03-08T23:16:44.177085+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:46.643 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:16:46.928 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-08T23:16:46.929 INFO:teuthology.orchestra.run.vm02.stdout:[{"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.110", "hostname": "vm10", "labels": [], "status": ""}] 2026-03-08T23:16:46.985 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-08T23:16:46.985 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd crush tunables default 2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: cluster 2026-03-08T23:16:46.177247+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: cluster 2026-03-08T23:16:46.177247+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:46.929017+0000 mgr.x (mgr.14150) 33 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:46.929017+0000 mgr.x (mgr.14150) 33 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 
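Note: 'ceph osd crush tunables default' pins the CRUSH tunables profile while no OSDs exist yet, so every OSD created later inherits it. A sketch of how the effective values could be inspected afterwards:

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph osd crush show-tunables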
2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.110757+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.112598+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.114966+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.116655+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.117104+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:16:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.117682+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:16:47.395 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:47 vm02 bash[17457]: audit 2026-03-08T23:16:47.118077+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: cephadm 2026-03-08T23:16:47.118651+0000 mgr.x (mgr.14150) 34 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: cephadm 2026-03-08T23:16:47.153607+0000 mgr.x (mgr.14150) 35 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: cephadm 2026-03-08T23:16:47.182647+0000 mgr.x (mgr.14150) 36 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: cephadm 2026-03-08T23:16:47.216316+0000 mgr.x (mgr.14150) 37 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: audit 2026-03-08T23:16:47.252801+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: audit 2026-03-08T23:16:47.255706+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:48.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:48 vm02 bash[17457]: audit 2026-03-08T23:16:47.258701+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:49.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:49 vm02 bash[17457]: cluster 2026-03-08T23:16:48.177427+0000 mgr.x (mgr.14150) 38 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:50.653 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:16:51.269 INFO:teuthology.orchestra.run.vm02.stderr:adjusted tunables profile to default
2026-03-08T23:16:51.325 INFO:tasks.cephadm:Adding mon.a on vm02
2026-03-08T23:16:51.325 INFO:tasks.cephadm:Adding mon.b on vm04
2026-03-08T23:16:51.325 INFO:tasks.cephadm:Adding mon.c on vm10
2026-03-08T23:16:51.325 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch apply mon '3;vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c'
2026-03-08T23:16:51.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:51 vm02 bash[17457]: cluster 2026-03-08T23:16:50.177591+0000 mgr.x (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:51.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:51 vm02 bash[17457]: audit 2026-03-08T23:16:50.903928+0000 mon.a (mon.0) 140 : audit [INF] from='client.? 192.168.123.102:0/1938167508' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-08T23:16:52.436 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:16:52.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:52 vm02 bash[17457]: audit 2026-03-08T23:16:51.270066+0000 mon.a (mon.0) 141 : audit [INF] from='client.? 192.168.123.102:0/1938167508' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-08T23:16:52.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:52 vm02 bash[17457]: cluster 2026-03-08T23:16:51.271810+0000 mon.a (mon.0) 142 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:16:52.696 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled mon update...
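Note: the placement string passed to 'ceph orch apply mon' above is '<count>;<host>:<addr>=<name>;...', i.e. three mons pinned to vm02/vm04/vm10 with explicit IPs and daemon names a, b, and c. "Scheduled mon update..." only means the spec was saved; the daemons come up asynchronously. A sketch of reviewing the saved spec:

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph orch ls mon --export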
2026-03-08T23:16:52.804 DEBUG:teuthology.orchestra.run.vm04:mon.b> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.b.service
2026-03-08T23:16:52.805 DEBUG:teuthology.orchestra.run.vm10:mon.c> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.c.service
2026-03-08T23:16:52.806 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-08T23:16:52.806 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph mon dump -f json
2026-03-08T23:16:53.984 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: cluster 2026-03-08T23:16:52.177791+0000 mgr.x (mgr.14150) 40 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.691821+0000 mgr.x (mgr.14150) 41 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: cephadm 2026-03-08T23:16:52.692857+0000 mgr.x (mgr.14150) 42 : cephadm [INF] Saving service mon spec with placement vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c;count:3
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.696311+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.697334+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
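
The two DEBUG lines at the top of this block show how the harness tails each new daemon's systemd unit with journalctl (unit name ceph-<fsid>@<daemon>.service, -n 0 so only new records stream). A minimal sketch of the same tail from Python, with the unit name mirroring this run; the "mon.b>" prefix is just how teuthology labels the stream:

    import subprocess

    # Tail a cephadm-managed daemon's journal the way the harness does above.
    fsid = "91105a84-1b44-11f1-9a43-e95894f13987"
    proc = subprocess.Popen(
        ["sudo", "journalctl", "-f", "-n", "0",
         "-u", f"ceph-{fsid}@mon.b.service"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:      # stream new journal records as they arrive
        print("mon.b>", line.rstrip())
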
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.698540+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.699190+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.702139+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.703431+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: audit 2026-03-08T23:16:52.704083+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:16:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:53 vm02 bash[17457]: cephadm 2026-03-08T23:16:52.704749+0000 mgr.x (mgr.14150) 43 : cephadm [INF] Deploying daemon mon.c on vm10
2026-03-08T23:16:54.240 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
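
The systemd warning above comes from the unit file cephadm generates for every daemon in this cluster, which still sets KillMode=none. Purely as a hypothetical experiment on a throwaway test node (cephadm owns and may regenerate this unit, so this is not a supported fix), a drop-in override switching to the safer mode the warning itself suggests might look like:

    from pathlib import Path

    # Hypothetical only: cephadm owns ceph-<fsid>@.service and may regenerate it.
    fsid = "91105a84-1b44-11f1-9a43-e95894f13987"
    dropin = Path(f"/etc/systemd/system/ceph-{fsid}@.service.d")
    dropin.mkdir(parents=True, exist_ok=True)
    # 'mixed' is one of the modes the warning recommends over 'none'.
    (dropin / "10-killmode.conf").write_text("[Service]\nKillMode=mixed\n")
    # Afterwards: sudo systemctl daemon-reload
    # and restart, e.g.: sudo systemctl restart ceph-<fsid>@mon.c.service
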
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 systemd[1]: Started Ceph mon.c for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.404+0000 7f92f3118d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.404+0000 7f92f3118d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.404+0000 7f92f3118d80 0 pidfile_write: ignore empty --pid-file
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 0 load: jerasure load: lrc
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Git sha 0
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: DB SUMMARY
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: DB Session ID: KOODN1P6WVDS3KX6NQIL
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: CURRENT file: CURRENT
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files:
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ;
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.error_if_exists: 0
2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.create_if_missing: 0
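
The "Waiting for 3 mons in monmap..." step a little further up polls "ceph mon dump -f json" through the same cephadm shell wrapper until all expected mons appear. A rough sketch under those assumptions (the "mons"/"name" fields match ceph mon dump's JSON output; the retry policy is illustrative and the --image flag from the logged command is omitted for brevity):

    import json
    import subprocess
    import time

    # Wrapper mirrors the command logged above.
    SHELL = ["sudo", "/home/ubuntu/cephtest/cephadm", "shell",
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "--fsid", "91105a84-1b44-11f1-9a43-e95894f13987",
             "--", "ceph", "mon", "dump", "-f", "json"]

    def wait_for_mons(expected=("a", "b", "c"), timeout=300, interval=5):
        # Poll the monmap until every expected mon name shows up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            dump = json.loads(subprocess.run(
                SHELL, capture_output=True, text=True, check=True).stdout)
            names = {m["name"] for m in dump["mons"]}
            if names >= set(expected):
                return names
            time.sleep(interval)
        raise TimeoutError(f"monmap never reached {expected}")
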
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.env: 0x5614adcd9dc0 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.info_log: 0x5614d0b71880 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.statistics: (nil) 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.use_fsync: 0 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-08T23:16:54.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.use_direct_reads: 0 
2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.db_log_dir: 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.wal_dir: 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.write_buffer_manager: 0x5614d0b75900 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 
bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.unordered_write: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.row_cache: None 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.wal_filter: None 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.two_write_queues: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.wal_compression: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.atomic_flush: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-08T23:16:54.659 
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-08T23:16:54.659 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 
vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_open_files: -1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Compression algorithms supported: 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kZSTD supported: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kXpressCompression supported: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kZlibCompression supported: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.408+0000 7f92f3118d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: 
/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.merge_operator: 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_filter: None 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d0b70480) 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cache_index_and_filter_blocks: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: pin_top_level_index_and_filter: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: index_type: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: data_block_index_type: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: index_shortening: 1 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: data_block_hash_table_util_ratio: 0.750000 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: checksum: 4 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: no_block_cache: 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_cache: 0x5614d0b97350 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_cache_name: BinnedLRUCache 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_cache_options: 2026-03-08T23:16:54.660 
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: capacity : 536870912 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: num_shard_bits : 4 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: strict_capacity_limit : 0 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: high_pri_pool_ratio: 0.000 2026-03-08T23:16:54.660 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_cache_compressed: (nil) 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: persistent_cache: (nil) 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_size: 4096 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_size_deviation: 10 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_restart_interval: 16 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: index_block_restart_interval: 1 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: metadata_block_size: 4096 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: partition_filters: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: use_delta_encoding: 1 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: filter_policy: bloomfilter 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: whole_key_filtering: 1 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: verify_compression: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: read_amp_bytes_per_bit: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: format_version: 5 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: enable_index_compression: 1 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: block_align: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: max_auto_readahead_size: 262144 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: prepopulate_block_cache: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: initial_auto_readahead_size: 8192 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: num_file_reads_for_auto_readahead: 2 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression: NoCompression 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 
23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.num_levels: 7 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-08T23:16:54.661 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 
7f92f3118d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 
23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 
2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.bloom_locality: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-08T23:16:54.662 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 
7f92f3118d80 4 rocksdb: Options.ttl: 2592000 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.enable_blob_files: false 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.min_blob_size: 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.412+0000 7f92f3118d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.416+0000 7f92f3118d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.416+0000 7f92f3118d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.420+0000 7f92f3118d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7829cf02-c7f6-40f4-bb15-df8c86c38b60 2026-03-08T23:16:54.663 
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.420+0000 7f92f3118d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773011814423234, "job": 1, "event": "recovery_started", "wal_files": [4]}
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.420+0000 7f92f3118d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.420+0000 7f92f3118d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773011814424385, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773011814, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7829cf02-c7f6-40f4-bb15-df8c86c38b60", "db_session_id": "KOODN1P6WVDS3KX6NQIL", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.420+0000 7f92f3118d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773011814424742, "job": 1, "event": "recovery_finished"}
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.420+0000 7f92f3118d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.424+0000 7f92f3118d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.424+0000 7f92f3118d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5614d0b98e00
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.424+0000 7f92f3118d80 4 rocksdb: DB pointer 0x5614d0ca4000
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.424+0000 7f92f3118d80 0 mon.c does not exist in monmap, will attempt to join an existing cluster
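
The EVENT_LOG_v1 records above are single-line JSON payloads embedded in the rocksdb log channel, so they can be recovered mechanically from a captured journal. A small sketch that extracts them from a saved teuthology log (the marker string is exactly what appears in these records; file names and usage are illustrative):

    import json

    def rocksdb_events(lines):
        # Yield (event, payload) for every EVENT_LOG_v1 record in `lines`;
        # in a one-record-per-line journal the JSON terminates the line.
        marker = "rocksdb: EVENT_LOG_v1 "
        for line in lines:
            idx = line.find(marker)
            if idx == -1:
                continue
            payload = json.loads(line[idx + len(marker):])
            yield payload.get("event"), payload

    # e.g. events from a saved log:
    # with open("teuthology.log") as fh:
    #     for ev, data in rocksdb_events(fh):
    #         print(ev, data.get("job"), data.get("file_number"))
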
23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.424+0000 7f92f3118d80 0 using public_addr v2:192.168.123.110:0/0 -> [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.424+0000 7f92f3118d80 0 starting mon.c rank -1 at public addrs [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] at bind addrs [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon_data /var/lib/ceph/mon/ceph-c fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.428+0000 7f92f3118d80 1 mon.c@-1(???) e0 preinit fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.428+0000 7f92e8ee2640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.428+0000 7f92e8ee2640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: ** DB Stats ** 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: ** Compaction Stats [default] ** 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0 0.0 0.0 
2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.7 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: ** Compaction Stats [default] ** 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.7 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: AddFile(Total Files): cumulative 0, interval 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: AddFile(Keys): cumulative 0, interval 0 2026-03-08T23:16:54.663 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Block cache BinnedLRUCache@0x5614d0b97350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: ** File Read Latency Histogram By Level [default] ** 
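The RocksDB EVENT_LOG_v1 records above are single-line JSON payloads, so they can be pulled out of a saved teuthology log for inspection. A minimal sketch, assuming journalctl-style lines as captured here; the file name teuthology.log and the helper rocksdb_events are hypothetical, not part of teuthology:

    import json
    import re

    # Matches the JSON payload of RocksDB EVENT_LOG_v1 records as they
    # appear in the journalctl capture above (one JSON object per record).
    EVENT_RE = re.compile(r'rocksdb: EVENT_LOG_v1 (\{.*\})')

    def rocksdb_events(lines):
        """Yield each EVENT_LOG_v1 record as a parsed dict."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # Example: list the recovery events seen during mon.c startup.
    # with open("teuthology.log") as f:
    #     for ev in rocksdb_events(f):
    #         print(ev["event"], ev.get("job"))
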
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 0 mon.c@-1(synchronizing).mds e1 new map
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 0 mon.c@-1(synchronizing).mds e1 print_map
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: e1
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: btime 2026-03-08T23:15:53:503174+0000
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: enable_multiple, ever_enabled_multiple: 1,1
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: legacy client fscid: -1
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]:
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: No filesystems configured
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 1 mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.448+0000 7f92ebee8640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.452+0000 7f92ebee8640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.452+0000 7f92ebee8640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.452+0000 7f92ebee8640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.452+0000 7f92ebee8640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:53.503738+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:53.493027+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462527+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462554+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462561+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462565+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T23:15:51.971315+0000
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462581+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:16:54.664 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462585+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462588+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462592+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462854+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.462868+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:54.473862+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:54.531390+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.102:0/1703890835' entity='client.admin'
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:55.190893+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.102:0/372855996' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:57.471512+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.102:0/3091593712' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:58.206276+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon x
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:58.211069+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: x(active, starting, since 0.00485359s)
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.213544+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.213853+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.214153+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.214425+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.214718+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:58.219971+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon x is now available
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.228824+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.229856+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x'
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.232747+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x'
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.233539+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:58.235912+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.102:0/2108429073' entity='mgr.x'
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:15:59.214178+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: x(active, since 1.00797s)
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:15:59.841562+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.102:0/3138267933' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:00.202776+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.102:0/3900145137' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:00.235500+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: x(active, since 2s)
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:00.546156+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.102:0/411893538' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:01.242052+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.102:0/411893538' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:01.245981+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: x(active, since 3s)
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:01.601459+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.102:0/3909592986' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-08T23:16:54.665 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:04.370386+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon x restarted
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:04.370802+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon x
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:04.379722+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:04.379835+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: x(active, starting, since 0.00911406s)
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.382214+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.382563+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.383388+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.383757+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.384100+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:04.391610+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon x is now available
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.400604+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.404815+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.417252+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.417942+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.420578+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:04.397718+0000 mgr.x (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.428821+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.806197+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:04.808849+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:05.382284+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: x(active, since 1.01157s)
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:05.309347+0000 mgr.x (mgr.14118) 2 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Bus STARTING
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.382646+0000 mgr.x (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.386491+0000 mgr.x (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:05.417129+0000 mgr.x (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Serving on https://192.168.123.102:7150
2026-03-08T23:16:54.666 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:05.417532+0000 mgr.x (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Client ('192.168.123.102', 54016) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:05.518143+0000 mgr.x (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Serving on http://192.168.123.102:8765
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:05.518207+0000 mgr.x (mgr.14118) 8 : cephadm [INF] [08/Mar/2026:23:16:05] ENGINE Bus STARTED
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.518795+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.663248+0000 mgr.x (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.666329+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.671612+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:54.667 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:05.920730+0000 mgr.x (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:06.177524+0000 mgr.x (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:06.177716+0000 mgr.x (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:06.192962+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:06.195067+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:06.443265+0000 mgr.x (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:06.707915+0000 mgr.x (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:07.205892+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: x(active, since 2s)
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:07.235035+0000 mgr.x (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm02
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:08.494507+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:08.494975+0000 mgr.x (mgr.14118) 16 : cephadm [INF] Added host vm02
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:08.498029+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:08.886414+0000 mgr.x (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:08.887290+0000 mgr.x (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:08.890107+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:09.162231+0000 mgr.x (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:09.162978+0000 mgr.x (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:09.168664+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:09.440682+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.102:0/416131322' entity='client.admin'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:09.711520+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.102:0/423425676' entity='client.admin'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:10.011374+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:10.074921+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.102:0/3027471765' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:10.284522+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.102:0/1235445407' entity='mgr.x'
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:11.012144+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.102:0/3027471765' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-08T23:16:54.668 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:11.014727+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: x(active, since 6s)
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:11.398240+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.102:0/2390393474' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:14.165958+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon x restarted
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:14.166377+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon x
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:14.170336+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:14.170417+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: x(active, starting, since 0.00414788s)
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.174066+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.174414+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.175325+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.175811+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.176291+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:14.181988+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon x is now available
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.200612+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.215100+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:14.228436+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:15.173853+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: x(active, since 1.00758s)
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:15.543714+0000 mgr.x (mgr.14150) 4 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Bus STARTING
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:15.608180+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:15.610229+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:15.644947+0000 mgr.x (mgr.14150) 5 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Serving on http://192.168.123.102:8765
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:15.753932+0000 mgr.x (mgr.14150) 6 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Serving on https://192.168.123.102:7150
2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]:
cephadm 2026-03-08T23:16:15.753976+0000 mgr.x (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Bus STARTED 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:15.753976+0000 mgr.x (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Bus STARTED 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:15.754195+0000 mgr.x (mgr.14150) 8 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Client ('192.168.123.102', 40756) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:15.754195+0000 mgr.x (mgr.14150) 8 : cephadm [INF] [08/Mar/2026:23:16:15] ENGINE Client ('192.168.123.102', 40756) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:15.927223+0000 mgr.x (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:15.927223+0000 mgr.x (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:16.086704+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:16.086704+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:16.489579+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.102:0/2933246231' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:16.489579+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.102:0/2933246231' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:16.845350+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.102:0/103063715' entity='client.admin' 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:16.845350+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 
192.168.123.102:0/103063715' entity='client.admin' 2026-03-08T23:16:54.669 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:17.091035+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: x(active, since 2s) 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:17.091035+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: x(active, since 2s) 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:18.213018+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:18.213018+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:18.756836+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:18.756836+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:20.217819+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: x(active, since 6s) 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:20.217819+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: x(active, since 6s) 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:21.324821+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.102:0/1825097943' entity='client.admin' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:21.324821+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 
192.168.123.102:0/1825097943' entity='client.admin' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.526626+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.526626+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.529224+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.529224+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.529743+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.529743+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.531918+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.531918+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.536445+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.536445+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.538602+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:24.538602+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.324252+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.324252+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 
2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.324739+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.324739+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.325498+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.325498+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.325839+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.325839+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.449379+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.449379+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.451837+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.451837+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.454570+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.454570+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:25.321700+0000 mgr.x (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 
bash[20034]: audit 2026-03-08T23:16:25.321700+0000 mgr.x (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.326321+0000 mgr.x (mgr.14150) 11 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.326321+0000 mgr.x (mgr.14150) 11 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.356704+0000 mgr.x (mgr.14150) 12 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.356704+0000 mgr.x (mgr.14150) 12 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.394153+0000 mgr.x (mgr.14150) 13 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.394153+0000 mgr.x (mgr.14150) 13 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.420954+0000 mgr.x (mgr.14150) 14 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:25.420954+0000 mgr.x (mgr.14150) 14 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:29.390707+0000 mgr.x (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:29.390707+0000 mgr.x (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:29.923666+0000 mgr.x (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:29.923666+0000 mgr.x (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:31.164817+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 
2026-03-08T23:16:31.164817+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.670 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:31.165226+0000 mgr.x (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:31.165226+0000 mgr.x (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:31.165494+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:31.165494+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:31.447894+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:31.447894+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:32.723568+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:32.723568+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:33.246650+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:33.246650+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:34.176227+0000 mgr.x (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:34.176227+0000 mgr.x (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:35.998706+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:35.998706+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.000944+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 
192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.000944+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.003931+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.003931+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.005868+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.005868+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.006522+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.006522+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.007134+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.007134+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.007520+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.007520+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.008065+0000 mgr.x (mgr.14150) 19 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.008065+0000 mgr.x (mgr.14150) 19 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.042393+0000 mgr.x 
(mgr.14150) 20 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.042393+0000 mgr.x (mgr.14150) 20 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.153936+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.153936+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.156517+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.156517+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.158932+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.158932+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.073195+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.073195+0000 mgr.x (mgr.14150) 21 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.084812+0000 mgr.x (mgr.14150) 22 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:36.084812+0000 mgr.x (mgr.14150) 22 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.110312+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:36.110312+0000 mgr.x (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:36.176415+0000 mgr.x (mgr.14150) 24 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 
B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:36.176415+0000 mgr.x (mgr.14150) 24 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:38.176609+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:38.176609+0000 mgr.x (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:40.097104+0000 mgr.x (mgr.14150) 26 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:40.097104+0000 mgr.x (mgr.14150) 26 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm10", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:40.176756+0000 mgr.x (mgr.14150) 27 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:40.176756+0000 mgr.x (mgr.14150) 27 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:40.667881+0000 mgr.x (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:40.667881+0000 mgr.x (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm10 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:41.975064+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:41.975064+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:41.975797+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:41.975797+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:41.975449+0000 mgr.x (mgr.14150) 29 : cephadm [INF] Added host vm10 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:41.975449+0000 mgr.x (mgr.14150) 
29 : cephadm [INF] Added host vm10 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:42.266437+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:42.266437+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.671 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:42.176932+0000 mgr.x (mgr.14150) 30 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:42.176932+0000 mgr.x (mgr.14150) 30 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:43.550780+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:43.550780+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:44.125037+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:44.125037+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:44.177085+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:44.177085+0000 mgr.x (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:46.177247+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:46.177247+0000 mgr.x (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:46.929017+0000 mgr.x (mgr.14150) 33 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:46.929017+0000 mgr.x (mgr.14150) 33 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.110757+0000 mon.a (mon.0) 130 : audit [INF] 
from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.110757+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.112598+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.112598+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.114966+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.114966+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.116655+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.116655+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.117104+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.117104+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm10", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.117682+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.117682+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.118077+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.118077+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 
bash[20034]: cephadm 2026-03-08T23:16:47.118651+0000 mgr.x (mgr.14150) 34 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.118651+0000 mgr.x (mgr.14150) 34 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.153607+0000 mgr.x (mgr.14150) 35 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.153607+0000 mgr.x (mgr.14150) 35 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.182647+0000 mgr.x (mgr.14150) 36 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.182647+0000 mgr.x (mgr.14150) 36 : cephadm [INF] Updating vm10:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.216316+0000 mgr.x (mgr.14150) 37 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:47.216316+0000 mgr.x (mgr.14150) 37 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.client.admin.keyring 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.252801+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.252801+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.255706+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.255706+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.258701+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:47.258701+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:48.177427+0000 mgr.x (mgr.14150) 38 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 
2026-03-08T23:16:48.177427+0000 mgr.x (mgr.14150) 38 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:50.177591+0000 mgr.x (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:50.177591+0000 mgr.x (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:50.903928+0000 mon.a (mon.0) 140 : audit [INF] from='client.? 192.168.123.102:0/1938167508' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:50.903928+0000 mon.a (mon.0) 140 : audit [INF] from='client.? 192.168.123.102:0/1938167508' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:51.270066+0000 mon.a (mon.0) 141 : audit [INF] from='client.? 192.168.123.102:0/1938167508' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:51.270066+0000 mon.a (mon.0) 141 : audit [INF] from='client.? 192.168.123.102:0/1938167508' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:51.271810+0000 mon.a (mon.0) 142 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:51.271810+0000 mon.a (mon.0) 142 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:52.177791+0000 mgr.x (mgr.14150) 40 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cluster 2026-03-08T23:16:52.177791+0000 mgr.x (mgr.14150) 40 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.691821+0000 mgr.x (mgr.14150) 41 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.691821+0000 mgr.x (mgr.14150) 41 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: 
cephadm 2026-03-08T23:16:52.692857+0000 mgr.x (mgr.14150) 42 : cephadm [INF] Saving service mon spec with placement vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c;count:3 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:52.692857+0000 mgr.x (mgr.14150) 42 : cephadm [INF] Saving service mon spec with placement vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c;count:3 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.696311+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.696311+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.697334+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.697334+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:16:54.672 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.698540+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.698540+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.699190+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.699190+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.702139+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.702139+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.703431+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.703431+0000 
mon.a (mon.0) 148 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: audit 2026-03-08T23:16:52.704083+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: cephadm 2026-03-08T23:16:52.704749+0000 mgr.x (mgr.14150) 43 : cephadm [INF] Deploying daemon mon.c on vm10
2026-03-08T23:16:54.673 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:54 vm10 bash[20034]: debug 2026-03-08T23:16:54.460+0000 7f92ebee8640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-08T23:16:56.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:16:55 vm04 bash[19918]: debug 2026-03-08T23:16:55.858+0000 7f4e2d39b640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-08T23:16:59.483 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 2
2026-03-08T23:16:59.483 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:16:59.483 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":2,"fsid":"91105a84-1b44-11f1-9a43-e95894f13987","modified":"2026-03-08T23:16:54.468981Z","created":"2026-03-08T23:15:51.971315Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:3300","nonce":0},{"type":"v1","addr":"192.168.123.110:6789","nonce":0}]},"addr":"192.168.123.110:6789/0","public_addr":"192.168.123.110:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:54.177976+0000 mgr.x (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cephadm 2026-03-08T23:16:54.285052+0000 mgr.x (mgr.14150) 45 : cephadm [INF] Deploying daemon mon.b on vm04
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:54.471822+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:54.472540+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:54.474439+0000 mon.a (mon.0) 158 : cluster [INF] mon.a calling monitor election
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:54.519658+0000 mon.a (mon.0) 159 : audit [DBG] from='client.? 192.168.123.110:0/2622695739' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:55.467947+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:55.867963+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:56.178129+0000 mgr.x (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:56.468353+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:56.470215+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:56.867722+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:57.468599+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:57.867803+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:58.178277+0000 mgr.x (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:58.468422+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:58.868061+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:59.468531+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.479257+0000 mon.a (mon.0) 169 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483062+0000 mon.a (mon.0) 170 : cluster [DBG] monmap epoch 2
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483079+0000 mon.a (mon.0) 171 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483093+0000 mon.a (mon.0) 172 : cluster [DBG] last_changed 2026-03-08T23:16:54.468981+0000
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483101+0000 mon.a (mon.0) 173 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483111+0000 mon.a (mon.0) 174 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483119+0000 mon.a (mon.0) 175 : cluster [DBG] election_strategy: 1
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483128+0000 mon.a (mon.0) 176 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483137+0000 mon.a (mon.0) 177 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483400+0000 mon.a (mon.0) 178 : cluster [DBG] fsmap
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483421+0000 mon.a (mon.0) 179 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483528+0000 mon.a (mon.0) 180 : cluster [DBG] mgrmap e13: x(active, since 45s)
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: cluster 2026-03-08T23:16:59.483613+0000 mon.a (mon.0) 181 : cluster [INF] overall HEALTH_OK
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:59.488428+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:59.491612+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:59.497815+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:59.501066+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:16:59 vm02 bash[17457]: audit 2026-03-08T23:16:59.515369+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
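The monmap dumped above is plain JSON, so the state the harness keys on can be read straight out of it. A minimal sketch in Python, assuming the epoch-2 dump above has been saved to monmap.json (a hypothetical filename); the field names come from the dump itself:

import json

# Parse a monmap as dumped by `ceph mon dump -f json`; field names are
# taken from the epoch-2 dump above.
with open("monmap.json") as f:
    monmap = json.load(f)

names = [m["name"] for m in monmap["mons"]]   # ['a', 'c'] at epoch 2
quorum = monmap["quorum"]                     # ranks in quorum, e.g. [0, 1]
print(f"epoch {monmap['epoch']}: {len(names)} mons ({','.join(names)}), {len(quorum)} in quorum")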
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:54.177976+0000 mgr.x (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cephadm 2026-03-08T23:16:54.285052+0000 mgr.x (mgr.14150) 45 : cephadm [INF] Deploying daemon mon.b on vm04
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:54.471822+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:54.472540+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:54.474439+0000 mon.a (mon.0) 158 : cluster [INF] mon.a calling monitor election
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:54.519658+0000 mon.a (mon.0) 159 : audit [DBG] from='client.? 192.168.123.110:0/2622695739' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:55.467947+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:55.867963+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:56.178129+0000 mgr.x (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:56.468353+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:56.470215+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:56.867722+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:57.468599+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:57.867803+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:58.178277+0000 mgr.x (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:58.468422+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:58.868061+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:59.468531+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.479257+0000 mon.a (mon.0) 169 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483062+0000 mon.a (mon.0) 170 : cluster [DBG] monmap epoch 2
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483079+0000 mon.a (mon.0) 171 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483093+0000 mon.a (mon.0) 172 : cluster [DBG] last_changed 2026-03-08T23:16:54.468981+0000
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483101+0000 mon.a (mon.0) 173 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483111+0000 mon.a (mon.0) 174 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483119+0000 mon.a (mon.0) 175 : cluster [DBG] election_strategy: 1
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483128+0000 mon.a (mon.0) 176 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483137+0000 mon.a (mon.0) 177 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483400+0000 mon.a (mon.0) 178 : cluster [DBG] fsmap
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483421+0000 mon.a (mon.0) 179 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483528+0000 mon.a (mon.0) 180 : cluster [DBG] mgrmap e13: x(active, since 45s)
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: cluster 2026-03-08T23:16:59.483613+0000 mon.a (mon.0) 181 : cluster [INF] overall HEALTH_OK
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:59.488428+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:59.491612+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:59.497815+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.909 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:59.501066+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:16:59.909 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:16:59 vm10 bash[20034]: audit 2026-03-08T23:16:59.515369+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:00.562 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-08T23:17:00.562 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph mon dump -f json
2026-03-08T23:17:00.893 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:17:00 vm02 bash[17721]: debug 2026-03-08T23:17:00.463+0000 7f2040bbb640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-08T23:17:04.287 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:17:05.190 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:17:05.190 INFO:teuthology.orchestra.run.vm10.stdout:{"epoch":3,"fsid":"91105a84-1b44-11f1-9a43-e95894f13987","modified":"2026-03-08T23:16:59.868626Z","created":"2026-03-08T23:15:51.971315Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:3300","nonce":0},{"type":"v1","addr":"192.168.123.110:6789","nonce":0}]},"addr":"192.168.123.110:6789/0","public_addr":"192.168.123.110:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-08T23:17:05.190 INFO:teuthology.orchestra.run.vm10.stderr:dumped monmap epoch 3
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:16:59.872463+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:16:59.872806+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
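The "Waiting for 3 mons in monmap" step re-runs the `ceph mon dump -f json` command shown in the DEBUG line above until the monmap lists all three mons. A minimal sketch in Python of an equivalent poll; illustrative only, not teuthology's actual implementation (the --image argument is dropped here for brevity):

import json
import subprocess
import time

# The same cephadm shell invocation logged above; stdout carries the
# monmap JSON, the "dumped monmap epoch N" note goes to stderr.
CMD = ["sudo", "/home/ubuntu/cephtest/cephadm", "shell",
       "-c", "/etc/ceph/ceph.conf",
       "-k", "/etc/ceph/ceph.client.admin.keyring",
       "--fsid", "91105a84-1b44-11f1-9a43-e95894f13987",
       "--", "ceph", "mon", "dump", "-f", "json"]

monmap = json.loads(subprocess.check_output(CMD))
while len(monmap["mons"]) < 3:
    time.sleep(5)
    monmap = json.loads(subprocess.check_output(CMD))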
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:16:59.872873+0000 mon.a (mon.0) 190 : cluster [INF] mon.a calling monitor election
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:16:59.873823+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:16:59.874911+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:00.178422+0000 mgr.x (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:00.868194+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:01.868077+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:02.178633+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:02.868588+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:03.868493+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.200 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.868604+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.874436+0000 mon.a (mon.0) 197 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877381+0000 mon.a (mon.0) 198 : cluster [DBG] monmap epoch 3
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877390+0000 mon.a (mon.0) 199 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877394+0000 mon.a (mon.0) 200 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877397+0000 mon.a (mon.0) 201 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877400+0000 mon.a (mon.0) 202 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877403+0000 mon.a (mon.0) 203 : cluster [DBG] election_strategy: 1
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877408+0000 mon.a (mon.0) 204 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877411+0000 mon.a (mon.0) 205 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877414+0000 mon.a (mon.0) 206 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877827+0000 mon.a (mon.0) 207 : cluster [DBG] fsmap
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877846+0000 mon.a (mon.0) 208 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.877967+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: x(active, since 50s)
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.878086+0000 mon.a (mon.0) 210 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.880718+0000 mon.a (mon.0) 211 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.880774+0000 mon.a (mon.0) 212 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: cluster 2026-03-08T23:17:04.880794+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum)
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.884032+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.886737+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.889227+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.903429+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
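The MON_DOWN warning follows directly from the epoch-3 monmap above: it lists three mons, but "quorum":[0,1] still names only ranks 0 and 1, so rank 2 (mon.b) counts as down while it finishes syncing. A minimal sketch in Python that derives the same set from a monmap dump read on stdin:

import json
import sys

# Mons present in the monmap but absent from the quorum rank list are
# the ones the MON_DOWN health check reports. With the epoch-3 dump
# above this prints: 1/3 mons down: b
monmap = json.load(sys.stdin)
in_quorum = set(monmap["quorum"])             # quorum ranks, e.g. {0, 1}
down = [m["name"] for m in monmap["mons"] if m["rank"] not in in_quorum]
if down:
    print(f"{len(down)}/{len(monmap['mons'])} mons down: {','.join(down)}")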
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.907019+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.907652+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:05.201 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:04 vm10 bash[20034]: audit 2026-03-08T23:17:04.908089+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:05.248 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-08T23:17:05.248 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph config generate-minimal-conf
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:16:59.872463+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:16:59.872806+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:16:59.872873+0000 mon.a (mon.0) 190 : cluster [INF] mon.a calling monitor election
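The "final ceph.conf" comes from `ceph config generate-minimal-conf`, which emits just the entries a client needs to reach the monitors (the fsid and mon_host). A minimal sketch in Python of capturing that output outside the cephadm shell wrapper; the target path is hypothetical:

import subprocess

# Capture the minimal conf and stage it locally; teuthology drives the
# same command through `cephadm shell`, as the DEBUG line above shows.
conf = subprocess.check_output(
    ["sudo", "ceph", "config", "generate-minimal-conf"], text=True)
with open("/tmp/ceph.conf.minimal", "w") as f:  # hypothetical path
    f.write(conf)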
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:16:59.873823+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:16:59.874911+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:00.178422+0000 mgr.x (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:00.868194+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:01.868077+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:02.178633+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:02.868588+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:03.868493+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.868604+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.874436+0000 mon.a (mon.0) 197 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877381+0000 mon.a (mon.0) 198 : cluster [DBG] monmap epoch 3
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877390+0000 mon.a (mon.0) 199 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877394+0000 mon.a (mon.0) 200 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877397+0000 mon.a (mon.0) 201 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:17:05.254 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877400+0000 mon.a (mon.0) 202 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877403+0000 mon.a (mon.0) 203 : cluster [DBG] election_strategy: 1
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877408+0000 mon.a (mon.0) 204 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877411+0000 mon.a (mon.0) 205 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877414+0000 mon.a (mon.0) 206 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877827+0000 mon.a (mon.0) 207 : cluster [DBG] fsmap
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877846+0000 mon.a (mon.0) 208 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.877967+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: x(active, since 50s)
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.878086+0000 mon.a (mon.0) 210 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.880718+0000 mon.a (mon.0) 211 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.880774+0000 mon.a (mon.0) 212 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: cluster 2026-03-08T23:17:04.880794+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum)
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.884032+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.886737+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.889227+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.903429+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.907019+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:05.255 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.907019+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.907652+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.907652+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.908089+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:17:05.255 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:04 vm02 bash[17457]: audit 2026-03-08T23:17:04.908089+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:17:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cluster 2026-03-08T23:17:04.178861+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cluster 2026-03-08T23:17:04.178861+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.908688+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.908688+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.908816+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.908816+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.908902+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.908902+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.963547+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.963547+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating 
vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.965147+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.965147+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.967544+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: cephadm 2026-03-08T23:17:04.967544+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.011698+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.011698+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.015839+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.015839+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.019061+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.019061+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.021824+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.021824+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.034777+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.034777+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.037481+0000 mon.a (mon.0) 226 : audit [INF] 
from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.037481+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.043696+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.043696+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.065271+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.065271+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.068348+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.068348+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.071264+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.071264+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.074327+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.074327+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.075104+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.075104+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.075675+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 
2026-03-08T23:17:05.075675+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.076086+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.076086+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.190383+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.110:0/2815658055' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.190383+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.110:0/2815658055' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.456311+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.456311+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.460280+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.460280+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.461170+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.461170+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.461637+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.461637+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T23:17:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 
bash[17457]: audit 2026-03-08T23:17:05.462029+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.462029+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.851870+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.851870+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.856200+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.856200+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.857171+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.857171+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.857622+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.857622+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.858007+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.858007+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 2026-03-08T23:17:05.868619+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:17:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:06 vm02 bash[17457]: audit 
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cluster 2026-03-08T23:17:04.178861+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cephadm 2026-03-08T23:17:04.908688+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cephadm 2026-03-08T23:17:04.908816+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cephadm 2026-03-08T23:17:04.908902+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cephadm 2026-03-08T23:17:04.963547+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cephadm 2026-03-08T23:17:04.965147+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: cephadm 2026-03-08T23:17:04.967544+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.011698+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.015839+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.019061+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.021824+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.034777+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.037481+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.043696+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.065271+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.068348+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.071264+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.074327+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.075104+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.075675+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.076086+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.190383+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.110:0/2815658055' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.456311+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.460280+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.461170+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.461637+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.462029+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.851870+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.856200+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.857171+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.857622+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.858007+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:06.409 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:06 vm10 bash[20034]: audit 2026-03-08T23:17:05.868619+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:54.177976+0000 mgr.x (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:16:54.285052+0000 mgr.x (mgr.14150) 45 : cephadm [INF] Deploying daemon mon.b on vm04
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:54.471822+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:54.472540+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:54.474439+0000 mon.a (mon.0) 158 : cluster [INF] mon.a calling monitor election
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:54.519658+0000 mon.a (mon.0) 159 : audit [DBG] from='client.? 192.168.123.110:0/2622695739' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:55.467947+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:55.867963+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:56.178129+0000 mgr.x (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:56.468353+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:56.470215+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:56.867722+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.276 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:57.468599+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:57.867803+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:58.178277+0000 mgr.x (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:58.468422+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:58.868061+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.468531+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.479257+0000 mon.a (mon.0) 169 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483062+0000 mon.a (mon.0) 170 : cluster [DBG] monmap epoch 2
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483079+0000 mon.a (mon.0) 171 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483093+0000 mon.a (mon.0) 172 : cluster [DBG] last_changed 2026-03-08T23:16:54.468981+0000
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483101+0000 mon.a (mon.0) 173 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483111+0000 mon.a (mon.0) 174 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483119+0000 mon.a (mon.0) 175 : cluster [DBG] election_strategy: 1
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483128+0000 mon.a (mon.0) 176 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483137+0000 mon.a (mon.0) 177 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483400+0000 mon.a (mon.0) 178 : cluster [DBG] fsmap
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483421+0000 mon.a (mon.0) 179 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483528+0000 mon.a (mon.0) 180 : cluster [DBG] mgrmap e13: x(active, since 45s)
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.483613+0000 mon.a (mon.0) 181 : cluster [INF] overall HEALTH_OK
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.488428+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.277 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.491612+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.497815+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.501066+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.515369+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.872463+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.872806+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.872873+0000 mon.a (mon.0) 190 : cluster [INF] mon.a calling monitor election
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:16:59.873823+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:16:59.874911+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:00.178422+0000 mgr.x (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:00.868194+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:01.868077+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:02.178633+0000 mgr.x (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:02.868588+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:03.868493+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.868604+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.874436+0000 mon.a (mon.0) 197 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877381+0000 mon.a (mon.0) 198 : cluster [DBG] monmap epoch 3
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster
2026-03-08T23:17:04.877381+0000 mon.a (mon.0) 198 : cluster [DBG] monmap epoch 3 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877390+0000 mon.a (mon.0) 199 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877390+0000 mon.a (mon.0) 199 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877394+0000 mon.a (mon.0) 200 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877394+0000 mon.a (mon.0) 200 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877397+0000 mon.a (mon.0) 201 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877397+0000 mon.a (mon.0) 201 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877400+0000 mon.a (mon.0) 202 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877400+0000 mon.a (mon.0) 202 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877403+0000 mon.a (mon.0) 203 : cluster [DBG] election_strategy: 1 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877403+0000 mon.a (mon.0) 203 : cluster [DBG] election_strategy: 1 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877408+0000 mon.a (mon.0) 204 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877408+0000 mon.a (mon.0) 204 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877411+0000 mon.a (mon.0) 205 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877411+0000 mon.a (mon.0) 205 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877414+0000 mon.a (mon.0) 206 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877414+0000 mon.a (mon.0) 206 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 
2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877827+0000 mon.a (mon.0) 207 : cluster [DBG] fsmap 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877827+0000 mon.a (mon.0) 207 : cluster [DBG] fsmap 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877846+0000 mon.a (mon.0) 208 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:17:07.278 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877846+0000 mon.a (mon.0) 208 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877967+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: x(active, since 50s) 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.877967+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: x(active, since 50s) 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.878086+0000 mon.a (mon.0) 210 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN) 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.878086+0000 mon.a (mon.0) 210 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN) 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.880718+0000 mon.a (mon.0) 211 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.880718+0000 mon.a (mon.0) 211 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.880774+0000 mon.a (mon.0) 212 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.880774+0000 mon.a (mon.0) 212 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,c 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.880794+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum) 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.880794+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] is down (out of quorum) 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.884032+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.884032+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:07.279 
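Note: the MON_DOWN warning above is the expected transient while mon.b restarts into monmap epoch 3; quorum at this point is only a,c. A minimal sketch for watching the quorum recover by hand, using standard ceph CLI commands through the same cephadm wrapper this run uses elsewhere (the wrapper path comes from this log; `cephadm shell` infers the config when only one cluster is present, as the "Inferring config" line later in this log shows):

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph health detail
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph mon stat
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph quorum_status --format json-pretty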
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.886737+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.889227+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.903429+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.907019+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.907652+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:04.908089+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cluster 2026-03-08T23:17:04.178861+0000 mgr.x (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:17:04.908688+0000 mgr.x (mgr.14150) 51 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:17:04.908816+0000 mgr.x (mgr.14150) 52 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:17:04.908902+0000 mgr.x (mgr.14150) 53 : cephadm [INF] Updating vm10:/etc/ceph/ceph.conf
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:17:04.963547+0000 mgr.x (mgr.14150) 54 : cephadm [INF] Updating vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:17:04.965147+0000 mgr.x (mgr.14150) 55 : cephadm [INF] Updating vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: cephadm 2026-03-08T23:17:04.967544+0000 mgr.x (mgr.14150) 56 : cephadm [INF] Updating vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/config/ceph.conf
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.011698+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.015839+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.019061+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.021824+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.034777+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.037481+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.043696+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.065271+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.068348+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.071264+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.074327+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.279 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.075104+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.075675+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.076086+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.190383+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.110:0/2815658055' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.456311+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.460280+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.461170+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.461637+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.462029+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.851870+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.856200+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.857171+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.857622+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.858007+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:06 vm04 bash[19918]: audit 2026-03-08T23:17:05.868619+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:07.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:01.870000+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.179040+0000 mgr.x (mgr.14150) 63 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.887691+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.888695+0000 mon.a (mon.0) 254 : cluster [INF] mon.a calling monitor election
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.888785+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.891179+0000 mon.a (mon.0) 255 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895515+0000 mon.a (mon.0) 256 : cluster [DBG] monmap epoch 3
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895521+0000 mon.a (mon.0) 257 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895524+0000 mon.a (mon.0) 258 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895575+0000 mon.a (mon.0) 259 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895589+0000 mon.a (mon.0) 260 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895592+0000 mon.a (mon.0) 261 : cluster [DBG] election_strategy: 1
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895594+0000 mon.a (mon.0) 262 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895597+0000 mon.a (mon.0) 263 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895599+0000 mon.a (mon.0) 264 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895840+0000 mon.a (mon.0) 265 : cluster [DBG] fsmap
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895852+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.895959+0000 mon.a (mon.0) 267 : cluster [DBG] mgrmap e13: x(active, since 52s)
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.896088+0000 mon.a (mon.0) 268 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c)
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.896097+0000 mon.a (mon.0) 269 : cluster [INF] Cluster is now healthy
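Note: with mon.b's election answered, all three monitors are back in quorum (ranks 0,1,2) and the MON_DOWN check clears. The monmap fields repeated in the dumps above (fsid, min_mon_release, election_strategy, per-rank v2/v1 addresses) can be re-read on demand; a hedged sketch using standard ceph commands:

    ceph mon dump                  # epoch, fsid, election_strategy, rank -> [v2:...,v1:...] addrs
    ceph log last 20 info cluster  # replay recent cluster-log entries like the ones mirrored into these journals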
2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.896097+0000 mon.a (mon.0) 269 : cluster [INF] Cluster is now healthy 2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.899096+0000 mon.a (mon.0) 270 : cluster [INF] overall HEALTH_OK 2026-03-08T23:17:07.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:07 vm04 bash[19918]: cluster 2026-03-08T23:17:06.899096+0000 mon.a (mon.0) 270 : cluster [INF] overall HEALTH_OK 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:01.870000+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:01.870000+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.179040+0000 mgr.x (mgr.14150) 63 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.179040+0000 mgr.x (mgr.14150) 63 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.887691+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.887691+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.888695+0000 mon.a (mon.0) 254 : cluster [INF] mon.a calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.888695+0000 mon.a (mon.0) 254 : cluster [INF] mon.a calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.888785+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.888785+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.891179+0000 mon.a (mon.0) 255 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.891179+0000 mon.a (mon.0) 255 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895515+0000 mon.a (mon.0) 256 : cluster [DBG] monmap epoch 3 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895515+0000 mon.a (mon.0) 256 : cluster [DBG] monmap epoch 3 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 
bash[17457]: cluster 2026-03-08T23:17:06.895521+0000 mon.a (mon.0) 257 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895521+0000 mon.a (mon.0) 257 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895524+0000 mon.a (mon.0) 258 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895524+0000 mon.a (mon.0) 258 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895575+0000 mon.a (mon.0) 259 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895575+0000 mon.a (mon.0) 259 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895589+0000 mon.a (mon.0) 260 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895589+0000 mon.a (mon.0) 260 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895592+0000 mon.a (mon.0) 261 : cluster [DBG] election_strategy: 1 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895592+0000 mon.a (mon.0) 261 : cluster [DBG] election_strategy: 1 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895594+0000 mon.a (mon.0) 262 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895594+0000 mon.a (mon.0) 262 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895597+0000 mon.a (mon.0) 263 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895597+0000 mon.a (mon.0) 263 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895599+0000 mon.a (mon.0) 264 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895599+0000 mon.a (mon.0) 264 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895840+0000 mon.a (mon.0) 265 : cluster [DBG] 
fsmap 2026-03-08T23:17:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895840+0000 mon.a (mon.0) 265 : cluster [DBG] fsmap 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895852+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895852+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895959+0000 mon.a (mon.0) 267 : cluster [DBG] mgrmap e13: x(active, since 52s) 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.895959+0000 mon.a (mon.0) 267 : cluster [DBG] mgrmap e13: x(active, since 52s) 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.896088+0000 mon.a (mon.0) 268 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.896088+0000 mon.a (mon.0) 268 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.896097+0000 mon.a (mon.0) 269 : cluster [INF] Cluster is now healthy 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.896097+0000 mon.a (mon.0) 269 : cluster [INF] Cluster is now healthy 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.899096+0000 mon.a (mon.0) 270 : cluster [INF] overall HEALTH_OK 2026-03-08T23:17:07.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:07 vm02 bash[17457]: cluster 2026-03-08T23:17:06.899096+0000 mon.a (mon.0) 270 : cluster [INF] overall HEALTH_OK 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:01.870000+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:01.870000+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.179040+0000 mgr.x (mgr.14150) 63 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.179040+0000 mgr.x (mgr.14150) 63 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.887691+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.887691+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-08T23:17:07.657 
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.888695+0000 mon.a (mon.0) 254 : cluster [INF] mon.a calling monitor election 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.888695+0000 mon.a (mon.0) 254 : cluster [INF] mon.a calling monitor election 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.888785+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-08T23:17:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.888785+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.891179+0000 mon.a (mon.0) 255 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.891179+0000 mon.a (mon.0) 255 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895515+0000 mon.a (mon.0) 256 : cluster [DBG] monmap epoch 3 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895515+0000 mon.a (mon.0) 256 : cluster [DBG] monmap epoch 3 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895521+0000 mon.a (mon.0) 257 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895521+0000 mon.a (mon.0) 257 : cluster [DBG] fsid 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895524+0000 mon.a (mon.0) 258 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895524+0000 mon.a (mon.0) 258 : cluster [DBG] last_changed 2026-03-08T23:16:59.868626+0000 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895575+0000 mon.a (mon.0) 259 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895575+0000 mon.a (mon.0) 259 : cluster [DBG] created 2026-03-08T23:15:51.971315+0000 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895589+0000 mon.a (mon.0) 260 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895589+0000 mon.a (mon.0) 260 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895592+0000 mon.a (mon.0) 261 : cluster [DBG] election_strategy: 1 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 
vm10 bash[20034]: cluster 2026-03-08T23:17:06.895592+0000 mon.a (mon.0) 261 : cluster [DBG] election_strategy: 1 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895594+0000 mon.a (mon.0) 262 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895594+0000 mon.a (mon.0) 262 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895597+0000 mon.a (mon.0) 263 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895597+0000 mon.a (mon.0) 263 : cluster [DBG] 1: [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0] mon.c 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895599+0000 mon.a (mon.0) 264 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895599+0000 mon.a (mon.0) 264 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895840+0000 mon.a (mon.0) 265 : cluster [DBG] fsmap 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895840+0000 mon.a (mon.0) 265 : cluster [DBG] fsmap 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895852+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895852+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895959+0000 mon.a (mon.0) 267 : cluster [DBG] mgrmap e13: x(active, since 52s) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.895959+0000 mon.a (mon.0) 267 : cluster [DBG] mgrmap e13: x(active, since 52s) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.896088+0000 mon.a (mon.0) 268 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.896088+0000 mon.a (mon.0) 268 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum a,c) 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.896097+0000 mon.a (mon.0) 269 : cluster [INF] Cluster is now healthy 2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.896097+0000 mon.a (mon.0) 269 : cluster [INF] Cluster is now healthy 
2026-03-08T23:17:07.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:07 vm10 bash[20034]: cluster 2026-03-08T23:17:06.899096+0000 mon.a (mon.0) 270 : cluster [INF] overall HEALTH_OK
2026-03-08T23:17:08.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:08 vm04 bash[19918]: audit 2026-03-08T23:17:07.868930+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:08.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:08 vm02 bash[17457]: audit 2026-03-08T23:17:07.868930+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:08.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:08 vm10 bash[20034]: audit 2026-03-08T23:17:07.868930+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:17:08.872 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:17:09.118 INFO:teuthology.orchestra.run.vm02.stdout:# minimal ceph.conf for 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:17:09.118 INFO:teuthology.orchestra.run.vm02.stdout:[global]
2026-03-08T23:17:09.118 INFO:teuthology.orchestra.run.vm02.stdout: fsid = 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:17:09.118 INFO:teuthology.orchestra.run.vm02.stdout: mon_host = [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.110:3300/0,v1:192.168.123.110:6789/0]
2026-03-08T23:17:09.128 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:17:08 vm02 bash[17721]: debug 2026-03-08T23:17:08.863+0000 7f2040bbb640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-08T23:17:09.173 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-08T23:17:09.173 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:17:09.173 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf
2026-03-08T23:17:09.221 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:17:09.221 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:17:09.273 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:17:09.273 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf
2026-03-08T23:17:09.282 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:17:09.282 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:17:09.332 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:17:09.332 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.conf
2026-03-08T23:17:09.339 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:17:09.339 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:17:09.388 INFO:tasks.cephadm:Adding mgr.x on vm02
2026-03-08T23:17:09.388 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch apply mgr '1;vm02=x'
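[Note: the bare "sudo dd of=..." commands above are how teuthology writes a root-owned file on each remote: the file body is piped over the SSH channel into dd, which copies stdin when no if= is given. A minimal standalone sketch of the same pattern -- hostname and local filename are illustrative, not taken from this run:

    # push a local file to a root-owned remote path via stdin + dd
    ssh ubuntu@vm02 'set -ex; sudo dd of=/etc/ceph/ceph.conf' < ceph.conf
]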
2026-03-08T23:17:09.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:09 vm02 bash[17457]: cluster 2026-03-08T23:17:08.179197+0000 mgr.x (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:09.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:09 vm02 bash[17457]: audit 2026-03-08T23:17:09.118411+0000 mon.a (mon.0) 272 : audit [DBG] from='client.? 192.168.123.102:0/3690409545' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:09.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:09 vm04 bash[19918]: cluster 2026-03-08T23:17:08.179197+0000 mgr.x (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:09.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:09 vm04 bash[19918]: audit 2026-03-08T23:17:09.118411+0000 mon.a (mon.0) 272 : audit [DBG] from='client.? 192.168.123.102:0/3690409545' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:09.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:09 vm10 bash[20034]: cluster 2026-03-08T23:17:08.179197+0000 mgr.x (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:09.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:09 vm10 bash[20034]: audit 2026-03-08T23:17:09.118411+0000 mon.a (mon.0) 272 : audit [DBG] from='client.? 192.168.123.102:0/3690409545' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:11.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:11 vm04 bash[19918]: cluster 2026-03-08T23:17:10.179353+0000 mgr.x (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:11.893 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:11 vm02 bash[17457]: cluster 2026-03-08T23:17:10.179353+0000 mgr.x (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:11.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:11 vm10 bash[20034]: cluster 2026-03-08T23:17:10.179353+0000 mgr.x (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:13.036 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:17:13.314 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled mgr update...
2026-03-08T23:17:13.365 INFO:tasks.cephadm:Deploying OSDs...
2026-03-08T23:17:13.365 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:17:13.365 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout
2026-03-08T23:17:13.368 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:17:13.368 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d?
2026-03-08T23:17:13.412 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda
2026-03-08T23:17:13.412 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb
2026-03-08T23:17:13.412 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc
2026-03-08T23:17:13.412 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd
2026-03-08T23:17:13.412 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde
2026-03-08T23:17:13.413 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-08T23:17:13.413 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-08T23:17:13.413 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdb
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-08 23:09:43.851224182 +0000
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-08 23:09:42.763224182 +0000
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-08 23:09:42.763224182 +0000
2026-03-08T23:17:13.459 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-08T23:17:13.459 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-08T23:17:13.509 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-08T23:17:13.509 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-08T23:17:13.509 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000196407 s, 2.6 MB/s
2026-03-08T23:17:13.509 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
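[Note: the probe that starts above is repeated for every scratch device on every host: stat must report a block special file, a one-sector dd read proves the device is readable, and the negated mount check proves nothing has it mounted (the devtmpfs entry is excluded to avoid a false match on /dev itself). A standalone sketch of the loop under the same assumptions -- the device list is illustrative; on this run it is ls /dev/[sv]d? minus the root disk:

    for dev in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
        stat "$dev"                                   # expect: block special file
        sudo dd if="$dev" of=/dev/null count=1        # readable: one 512-byte sector
        ! mount | grep -v devtmpfs | grep -q "$dev"   # fail if mounted anywhere
    done
]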
2026-03-08T23:17:13.560 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc
2026-03-08T23:17:13.608 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdc
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-08 23:09:43.859224182 +0000
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-08 23:09:42.771224182 +0000
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-08 23:09:42.771224182 +0000
2026-03-08T23:17:13.609 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-08T23:17:13.609 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-08T23:17:13.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: cluster 2026-03-08T23:17:12.179521+0000 mgr.x (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.314367+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.315031+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.315947+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.316336+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.319737+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.322759+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.336962+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.337461+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-08T23:17:13.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:13 vm10 bash[20034]: audit 2026-03-08T23:17:13.337982+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:13.662 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-08T23:17:13.663 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-08T23:17:13.663 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000616022 s, 831 kB/s
2026-03-08T23:17:13.663 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-08T23:17:13.710 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: cluster 2026-03-08T23:17:12.179521+0000 mgr.x (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.314367+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.315031+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.315947+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.316336+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.319737+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.322759+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.336962+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.337461+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-08T23:17:13.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:13 vm02 bash[17457]: audit 2026-03-08T23:17:13.337982+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdd
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-08 23:09:43.851224182 +0000
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-08 23:09:42.775224182 +0000
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-08 23:09:42.775224182 +0000
2026-03-08T23:17:13.724 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-08T23:17:13.724 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-08T23:17:13.772 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-08T23:17:13.772 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-08T23:17:13.772 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000110587 s, 4.6 MB/s
2026-03-08T23:17:13.773 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-08T23:17:13.819 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde
2026-03-08T23:17:13.864 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vde
2026-03-08T23:17:13.864 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:13.864 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-08T23:17:13.864 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:13.865 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-08 23:09:43.859224182 +0000
2026-03-08T23:17:13.865 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-08 23:09:42.767224182 +0000
2026-03-08T23:17:13.865 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-08 23:09:42.767224182 +0000
2026-03-08T23:17:13.865 INFO:teuthology.orchestra.run.vm02.stdout: Birth: -
2026-03-08T23:17:13.865 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: cluster 2026-03-08T23:17:12.179521+0000 mgr.x (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.314367+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.315031+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.315947+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.316336+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.319737+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.322759+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.336962+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.337461+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-08T23:17:13.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:13 vm04 bash[19918]: audit 2026-03-08T23:17:13.337982+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:13.912 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in
2026-03-08T23:17:13.912 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out
2026-03-08T23:17:13.912 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000124452 s, 4.1 MB/s
2026-03-08T23:17:13.913 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde
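[Note: the audit trail above shows the sequence the mgr drives while reconfiguring itself: a config dump, a minimal conf, and an auth get-or-create for its own key (record 279) with the standard mgr capability profile. The same key request in plain CLI form, matching the entity and caps in that audit record, would be:

    sudo ceph auth get-or-create mgr.x mon 'profile mgr' osd 'allow *' mds 'allow *'
]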
2026-03-08T23:17:13.958 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:17:13.958 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout
2026-03-08T23:17:13.961 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:17:13.961 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d?
2026-03-08T23:17:14.007 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda
2026-03-08T23:17:14.007 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb
2026-03-08T23:17:14.007 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc
2026-03-08T23:17:14.007 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd
2026-03-08T23:17:14.007 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde
2026-03-08T23:17:14.007 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-08T23:17:14.007 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-08T23:17:14.007 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-08 23:10:33.761968073 +0000
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-08 23:10:32.841968073 +0000
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-08 23:10:32.841968073 +0000
2026-03-08T23:17:14.052 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-08T23:17:14.052 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-08T23:17:14.100 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-08T23:17:14.101 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-08T23:17:14.101 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000200164 s, 2.6 MB/s
2026-03-08T23:17:14.101 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-08T23:17:14.149 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-08 23:10:33.765968073 +0000
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-08 23:10:32.849968073 +0000
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-08 23:10:32.849968073 +0000
2026-03-08T23:17:14.196 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-08T23:17:14.196 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-08T23:17:14.244 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-08T23:17:14.244 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-08T23:17:14.244 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000172282 s, 3.0 MB/s
2026-03-08T23:17:14.244 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-08T23:17:14.293 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-08 23:10:33.761968073 +0000
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-08 23:10:32.841968073 +0000
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-08 23:10:32.841968073 +0000
2026-03-08T23:17:14.341 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-08T23:17:14.341 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-08T23:17:14.392 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-08T23:17:14.392 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-08T23:17:14.392 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000188553 s, 2.7 MB/s
2026-03-08T23:17:14.392 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-08T23:17:14.437 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-08 23:10:33.765968073 +0000
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-08 23:10:32.849968073 +0000
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-08 23:10:32.849968073 +0000
2026-03-08T23:17:14.484 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-08T23:17:14.484 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-08T23:17:14.532 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-08T23:17:14.532 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-08T23:17:14.532 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000158346 s, 3.2 MB/s
2026-03-08T23:17:14.533 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-08T23:17:14.581 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:17:14.581 DEBUG:teuthology.orchestra.run.vm10:> dd if=/scratch_devs of=/dev/stdout
2026-03-08T23:17:14.585 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:17:14.585 DEBUG:teuthology.orchestra.run.vm10:> ls /dev/[sv]d?
2026-03-08T23:17:14.628 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vda
2026-03-08T23:17:14.628 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdb
2026-03-08T23:17:14.628 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdc
2026-03-08T23:17:14.628 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vdd
2026-03-08T23:17:14.628 INFO:teuthology.orchestra.run.vm10.stdout:/dev/vde
2026-03-08T23:17:14.628 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-08T23:17:14.628 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-08T23:17:14.628 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdb
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdb
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-08 23:10:08.597089263 +0000
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-08 23:10:07.741089263 +0000
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-08 23:10:07.741089263 +0000
2026-03-08T23:17:14.672 INFO:teuthology.orchestra.run.vm10.stdout: Birth: -
2026-03-08T23:17:14.672 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-08T23:17:14.720 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-08T23:17:14.720 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-08T23:17:14.720 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000192169 s, 2.7 MB/s
2026-03-08T23:17:14.721 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-08T23:17:14.765 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdc
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdc
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-08 23:10:08.605089263 +0000
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-08 23:10:07.741089263 +0000
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-08 23:10:07.741089263 +0000
2026-03-08T23:17:14.808 INFO:teuthology.orchestra.run.vm10.stdout: Birth: -
2026-03-08T23:17:14.808 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-08T23:17:14.856 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-08T23:17:14.856 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-08T23:17:14.856 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000190808 s, 2.7 MB/s
2026-03-08T23:17:14.857 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-08T23:17:14.901 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vdd
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vdd
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-08 23:10:08.597089263 +0000
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-08 23:10:07.741089263 +0000
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-08 23:10:07.741089263 +0000
2026-03-08T23:17:14.944 INFO:teuthology.orchestra.run.vm10.stdout: Birth: -
2026-03-08T23:17:14.944 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-08T23:17:14.991 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:14 vm10 bash[20034]: audit 2026-03-08T23:17:13.309906+0000 mgr.x (mgr.14150) 67 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm02=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:17:14.991 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:14 vm10 bash[20034]: cephadm 2026-03-08T23:17:13.310655+0000 mgr.x (mgr.14150) 68 : cephadm [INF] Saving service mgr spec with placement vm02=x;count:1
2026-03-08T23:17:14.991 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:14 vm10 bash[20034]: cephadm 2026-03-08T23:17:13.336746+0000 mgr.x (mgr.14150) 69 : cephadm [INF] Reconfiguring mgr.x (unknown last config time)...
2026-03-08T23:17:14.991 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:14 vm10 bash[20034]: cephadm 2026-03-08T23:17:13.338409+0000 mgr.x (mgr.14150) 70 : cephadm [INF] Reconfiguring daemon mgr.x on vm02
2026-03-08T23:17:14.991 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:14 vm10 bash[20034]: audit 2026-03-08T23:17:13.839355+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:14.991 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:14 vm10 bash[20034]: audit 2026-03-08T23:17:13.847410+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:14.992 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-08T23:17:14.992 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-08T23:17:14.992 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.000173515 s, 3.0 MB/s
2026-03-08T23:17:14.993 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-08T23:17:15.037 DEBUG:teuthology.orchestra.run.vm10:> stat /dev/vde
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout: File: /dev/vde
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout:Access: 2026-03-08 23:10:08.605089263 +0000
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout:Modify: 2026-03-08 23:10:07.733089263 +0000
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout:Change: 2026-03-08 23:10:07.733089263 +0000
2026-03-08T23:17:15.080 INFO:teuthology.orchestra.run.vm10.stdout: Birth: -
2026-03-08T23:17:15.080 DEBUG:teuthology.orchestra.run.vm10:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-08T23:17:15.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:14 vm04 bash[19918]: audit 2026-03-08T23:17:13.309906+0000 mgr.x (mgr.14150) 67 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm02=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:17:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:14 vm04 bash[19918]: cephadm 2026-03-08T23:17:13.310655+0000 mgr.x (mgr.14150) 68 : cephadm [INF] Saving service mgr spec with placement vm02=x;count:1
2026-03-08T23:17:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:14 vm04 bash[19918]: cephadm 2026-03-08T23:17:13.336746+0000 mgr.x (mgr.14150) 69 : cephadm [INF] Reconfiguring mgr.x (unknown last config time)...
2026-03-08T23:17:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:14 vm04 bash[19918]: cephadm 2026-03-08T23:17:13.338409+0000 mgr.x (mgr.14150) 70 : cephadm [INF] Reconfiguring daemon mgr.x on vm02
2026-03-08T23:17:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:14 vm04 bash[19918]: audit 2026-03-08T23:17:13.839355+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:14 vm04 bash[19918]: audit 2026-03-08T23:17:13.847410+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:15.128 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records in
2026-03-08T23:17:15.128 INFO:teuthology.orchestra.run.vm10.stderr:1+0 records out
2026-03-08T23:17:15.128 INFO:teuthology.orchestra.run.vm10.stderr:512 bytes copied, 0.00022453 s, 2.3 MB/s
2026-03-08T23:17:15.129 DEBUG:teuthology.orchestra.run.vm10:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-08T23:17:15.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:14 vm02 bash[17457]: audit 2026-03-08T23:17:13.309906+0000 mgr.x (mgr.14150) 67 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "1;vm02=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:17:15.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:14 vm02 bash[17457]: cephadm 2026-03-08T23:17:13.310655+0000 mgr.x (mgr.14150) 68 : cephadm [INF] Saving service mgr spec with placement vm02=x;count:1
2026-03-08T23:17:15.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:14 vm02 bash[17457]: cephadm 2026-03-08T23:17:13.336746+0000 mgr.x (mgr.14150) 69 : cephadm [INF] Reconfiguring mgr.x (unknown last config time)...
2026-03-08T23:17:15.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:14 vm02 bash[17457]: cephadm 2026-03-08T23:17:13.338409+0000 mgr.x (mgr.14150) 70 : cephadm [INF] Reconfiguring daemon mgr.x on vm02
2026-03-08T23:17:15.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:14 vm02 bash[17457]: audit 2026-03-08T23:17:13.839355+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:15.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:14 vm02 bash[17457]: audit 2026-03-08T23:17:13.847410+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:15.174 INFO:tasks.cephadm:Deploying osd.0 on vm02 with /dev/vde...
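[Note: deploying each OSD is a two-step cephadm sequence, visible in the commands that follow: ceph-volume zaps the device (wiping partitions and LVM metadata), then the orchestrator is asked to create an OSD on it. A sketch of the same two steps stripped of the per-run image pin, config, keyring, and fsid arguments used below:

    sudo cephadm ceph-volume -- lvm zap /dev/vde
    sudo cephadm shell -- ceph orch daemon add osd vm02:/dev/vde
]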
2026-03-08T23:17:15.174 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vde
2026-03-08T23:17:16.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:15 vm04 bash[19918]: cluster 2026-03-08T23:17:14.179690+0000 mgr.x (mgr.14150) 71 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:15 vm02 bash[17457]: cluster 2026-03-08T23:17:14.179690+0000 mgr.x (mgr.14150) 71 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:15 vm10 bash[20034]: cluster 2026-03-08T23:17:14.179690+0000 mgr.x (mgr.14150) 71 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:17 vm04 bash[19918]: cluster 2026-03-08T23:17:16.179848+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:18.143 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:17 vm02 bash[17457]: cluster 2026-03-08T23:17:16.179848+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:17 vm10 bash[20034]: cluster 2026-03-08T23:17:16.179848+0000 mgr.x (mgr.14150) 72 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:19.786 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:17:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:19 vm02 bash[17457]: cluster 2026-03-08T23:17:18.180007+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:18.180007+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:20.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:19 vm10 bash[20034]: cluster 2026-03-08T23:17:18.180007+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:20.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:19 vm10 bash[20034]: cluster 2026-03-08T23:17:18.180007+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:20.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:19 vm04 bash[19918]: cluster 2026-03-08T23:17:18.180007+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:20.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:19 vm04 bash[19918]: cluster 2026-03-08T23:17:18.180007+0000 mgr.x (mgr.14150) 73 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:20.622 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-08T23:17:20.639 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm02:/dev/vde 2026-03-08T23:17:22.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:21 vm10 bash[20034]: cluster 2026-03-08T23:17:20.180248+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:22.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:21 vm10 bash[20034]: cluster 2026-03-08T23:17:20.180248+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:21 vm04 bash[19918]: cluster 2026-03-08T23:17:20.180248+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:21 vm04 bash[19918]: cluster 2026-03-08T23:17:20.180248+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:22.393 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:21 vm02 bash[17457]: cluster 2026-03-08T23:17:20.180248+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:22.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:21 vm02 bash[17457]: cluster 2026-03-08T23:17:20.180248+0000 mgr.x (mgr.14150) 74 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:24.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:23 vm10 bash[20034]: cluster 2026-03-08T23:17:22.180469+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:24.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:23 vm10 bash[20034]: cluster 2026-03-08T23:17:22.180469+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:24.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:23 vm04 bash[19918]: cluster 2026-03-08T23:17:22.180469+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:24.375 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:23 vm04 bash[19918]: cluster 2026-03-08T23:17:22.180469+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:24.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:23 vm02 bash[17457]: cluster 2026-03-08T23:17:22.180469+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:24.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:23 vm02 bash[17457]: cluster 2026-03-08T23:17:22.180469+0000 mgr.x (mgr.14150) 75 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:25.251 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:17:26.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: cluster 2026-03-08T23:17:24.180700+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: cluster 2026-03-08T23:17:24.180700+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: audit 2026-03-08T23:17:25.523412+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: audit 2026-03-08T23:17:25.523412+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: audit 2026-03-08T23:17:25.525465+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: audit 2026-03-08T23:17:25.525465+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: audit 2026-03-08T23:17:25.526443+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:26.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:25 vm04 bash[19918]: audit 2026-03-08T23:17:25.526443+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: cluster 2026-03-08T23:17:24.180700+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: cluster 2026-03-08T23:17:24.180700+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:26.394 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: audit 2026-03-08T23:17:25.523412+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: audit 2026-03-08T23:17:25.523412+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: audit 2026-03-08T23:17:25.525465+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: audit 2026-03-08T23:17:25.525465+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: audit 2026-03-08T23:17:25.526443+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:26.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:25 vm02 bash[17457]: audit 2026-03-08T23:17:25.526443+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:26.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: cluster 2026-03-08T23:17:24.180700+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:26.453 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: cluster 2026-03-08T23:17:24.180700+0000 mgr.x (mgr.14150) 76 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:26.453 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: audit 2026-03-08T23:17:25.523412+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:17:26.453 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: audit 2026-03-08T23:17:25.523412+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:17:26.453 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: audit 2026-03-08T23:17:25.525465+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:17:26.454 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: audit 2026-03-08T23:17:25.525465+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:17:26.454 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: 
audit 2026-03-08T23:17:25.526443+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:26.454 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:25 vm10 bash[20034]: audit 2026-03-08T23:17:25.526443+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:27.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:26 vm04 bash[19918]: audit 2026-03-08T23:17:25.521958+0000 mgr.x (mgr.14150) 77 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:17:27.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:26 vm04 bash[19918]: audit 2026-03-08T23:17:25.521958+0000 mgr.x (mgr.14150) 77 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:17:27.393 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:26 vm02 bash[17457]: audit 2026-03-08T23:17:25.521958+0000 mgr.x (mgr.14150) 77 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:17:27.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:26 vm02 bash[17457]: audit 2026-03-08T23:17:25.521958+0000 mgr.x (mgr.14150) 77 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:17:27.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:26 vm10 bash[20034]: audit 2026-03-08T23:17:25.521958+0000 mgr.x (mgr.14150) 77 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:17:27.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:26 vm10 bash[20034]: audit 2026-03-08T23:17:25.521958+0000 mgr.x (mgr.14150) 77 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:17:28.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:27 vm04 bash[19918]: cluster 2026-03-08T23:17:26.180956+0000 mgr.x (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:28.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:27 vm04 bash[19918]: cluster 2026-03-08T23:17:26.180956+0000 mgr.x (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:28.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:27 vm02 bash[17457]: cluster 2026-03-08T23:17:26.180956+0000 mgr.x (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:28.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:27 vm02 bash[17457]: cluster 2026-03-08T23:17:26.180956+0000 mgr.x (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:28.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:27 vm10 bash[20034]: cluster 2026-03-08T23:17:26.180956+0000 mgr.x (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-08T23:17:28.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:27 vm10 bash[20034]: cluster 2026-03-08T23:17:26.180956+0000 mgr.x (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:30.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:29 vm04 bash[19918]: cluster 2026-03-08T23:17:28.181170+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:30.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:29 vm04 bash[19918]: cluster 2026-03-08T23:17:28.181170+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:30.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:29 vm02 bash[17457]: cluster 2026-03-08T23:17:28.181170+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:30.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:29 vm02 bash[17457]: cluster 2026-03-08T23:17:28.181170+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:30.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:29 vm10 bash[20034]: cluster 2026-03-08T23:17:28.181170+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:30.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:29 vm10 bash[20034]: cluster 2026-03-08T23:17:28.181170+0000 mgr.x (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: cluster 2026-03-08T23:17:30.181375+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: cluster 2026-03-08T23:17:30.181375+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.091190+0000 mon.a (mon.0) 287 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]: dispatch 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.091190+0000 mon.a (mon.0) 287 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]: dispatch 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.095801+0000 mon.a (mon.0) 288 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]': finished 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.095801+0000 mon.a (mon.0) 288 : audit [INF] from='client.? 
192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]': finished 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: cluster 2026-03-08T23:17:31.098373+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: cluster 2026-03-08T23:17:31.098373+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.098576+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.098576+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.757599+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/188497936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:17:32.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:31 vm04 bash[19918]: audit 2026-03-08T23:17:31.757599+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/188497936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: cluster 2026-03-08T23:17:30.181375+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: cluster 2026-03-08T23:17:30.181375+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.091190+0000 mon.a (mon.0) 287 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]: dispatch 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.091190+0000 mon.a (mon.0) 287 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]: dispatch 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.095801+0000 mon.a (mon.0) 288 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]': finished 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.095801+0000 mon.a (mon.0) 288 : audit [INF] from='client.? 
192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]': finished 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: cluster 2026-03-08T23:17:31.098373+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: cluster 2026-03-08T23:17:31.098373+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.098576+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.098576+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.757599+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/188497936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:17:32.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:31 vm02 bash[17457]: audit 2026-03-08T23:17:31.757599+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/188497936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: cluster 2026-03-08T23:17:30.181375+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: cluster 2026-03-08T23:17:30.181375+0000 mgr.x (mgr.14150) 80 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.091190+0000 mon.a (mon.0) 287 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]: dispatch 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.091190+0000 mon.a (mon.0) 287 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]: dispatch 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.095801+0000 mon.a (mon.0) 288 : audit [INF] from='client.? 192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]': finished 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.095801+0000 mon.a (mon.0) 288 : audit [INF] from='client.? 
192.168.123.102:0/270246007' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f9f4a95-f093-4c3b-af99-6c3664fdf90d"}]': finished 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: cluster 2026-03-08T23:17:31.098373+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: cluster 2026-03-08T23:17:31.098373+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.098576+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.098576+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.757599+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/188497936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:17:32.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:31 vm10 bash[20034]: audit 2026-03-08T23:17:31.757599+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/188497936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:17:34.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:33 vm04 bash[19918]: cluster 2026-03-08T23:17:32.181616+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:34.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:33 vm04 bash[19918]: cluster 2026-03-08T23:17:32.181616+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:34.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:33 vm02 bash[17457]: cluster 2026-03-08T23:17:32.181616+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:34.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:33 vm02 bash[17457]: cluster 2026-03-08T23:17:32.181616+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:34.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:33 vm10 bash[20034]: cluster 2026-03-08T23:17:32.181616+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:34.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:33 vm10 bash[20034]: cluster 2026-03-08T23:17:32.181616+0000 mgr.x (mgr.14150) 81 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:36.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:35 vm04 bash[19918]: cluster 2026-03-08T23:17:34.181836+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:36.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:35 vm04 bash[19918]: cluster 2026-03-08T23:17:34.181836+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:36.394 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:35 vm02 bash[17457]: cluster 2026-03-08T23:17:34.181836+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:36.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:35 vm02 bash[17457]: cluster 2026-03-08T23:17:34.181836+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:36.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:35 vm10 bash[20034]: cluster 2026-03-08T23:17:34.181836+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:36.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:35 vm10 bash[20034]: cluster 2026-03-08T23:17:34.181836+0000 mgr.x (mgr.14150) 82 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:38.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:37 vm04 bash[19918]: cluster 2026-03-08T23:17:36.182119+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:38.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:37 vm04 bash[19918]: cluster 2026-03-08T23:17:36.182119+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:37 vm02 bash[17457]: cluster 2026-03-08T23:17:36.182119+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:37 vm02 bash[17457]: cluster 2026-03-08T23:17:36.182119+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:38.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:37 vm10 bash[20034]: cluster 2026-03-08T23:17:36.182119+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:38.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:37 vm10 bash[20034]: cluster 2026-03-08T23:17:36.182119+0000 mgr.x (mgr.14150) 83 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:40.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:39 vm02 bash[17457]: cluster 2026-03-08T23:17:38.182364+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:40.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:39 vm02 bash[17457]: cluster 2026-03-08T23:17:38.182364+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:40.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:39 vm04 bash[19918]: cluster 2026-03-08T23:17:38.182364+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:40.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:39 vm04 bash[19918]: cluster 2026-03-08T23:17:38.182364+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:40.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:39 vm10 bash[20034]: cluster 2026-03-08T23:17:38.182364+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:40.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:39 vm10 bash[20034]: cluster 
2026-03-08T23:17:38.182364+0000 mgr.x (mgr.14150) 84 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:41.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:41 vm02 bash[17457]: audit 2026-03-08T23:17:40.244689+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T23:17:41.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:41 vm02 bash[17457]: audit 2026-03-08T23:17:40.244689+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T23:17:41.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:41 vm02 bash[17457]: audit 2026-03-08T23:17:40.245130+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:41.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:41 vm02 bash[17457]: audit 2026-03-08T23:17:40.245130+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:41.328 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:41 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:17:41.328 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:41 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:17:41.329 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:17:41 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:17:41.329 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:17:41 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:17:41.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:41 vm04 bash[19918]: audit 2026-03-08T23:17:40.244689+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T23:17:41.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:41 vm04 bash[19918]: audit 2026-03-08T23:17:40.244689+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T23:17:41.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:41 vm04 bash[19918]: audit 2026-03-08T23:17:40.245130+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:41.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:41 vm04 bash[19918]: audit 2026-03-08T23:17:40.245130+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:41.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:41 vm10 bash[20034]: audit 2026-03-08T23:17:40.244689+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T23:17:41.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:41 vm10 bash[20034]: audit 2026-03-08T23:17:40.244689+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T23:17:41.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:41 vm10 bash[20034]: audit 2026-03-08T23:17:40.245130+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:41.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:41 vm10 bash[20034]: audit 2026-03-08T23:17:40.245130+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: cluster 2026-03-08T23:17:40.182555+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: cluster 2026-03-08T23:17:40.182555+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: cephadm 2026-03-08T23:17:40.245468+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: cephadm 2026-03-08T23:17:40.245468+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: audit 2026-03-08T23:17:41.367331+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: audit 
2026-03-08T23:17:41.367331+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: audit 2026-03-08T23:17:41.382556+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: audit 2026-03-08T23:17:41.382556+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: audit 2026-03-08T23:17:41.406458+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:42 vm04 bash[19918]: audit 2026-03-08T23:17:41.406458+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: cluster 2026-03-08T23:17:40.182555+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: cluster 2026-03-08T23:17:40.182555+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: cephadm 2026-03-08T23:17:40.245468+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: cephadm 2026-03-08T23:17:40.245468+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: audit 2026-03-08T23:17:41.367331+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: audit 2026-03-08T23:17:41.367331+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: audit 2026-03-08T23:17:41.382556+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: audit 2026-03-08T23:17:41.382556+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: audit 2026-03-08T23:17:41.406458+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:42 vm02 bash[17457]: audit 2026-03-08T23:17:41.406458+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: 
cluster 2026-03-08T23:17:40.182555+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: cluster 2026-03-08T23:17:40.182555+0000 mgr.x (mgr.14150) 85 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: cephadm 2026-03-08T23:17:40.245468+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: cephadm 2026-03-08T23:17:40.245468+0000 mgr.x (mgr.14150) 86 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: audit 2026-03-08T23:17:41.367331+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: audit 2026-03-08T23:17:41.367331+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: audit 2026-03-08T23:17:41.382556+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: audit 2026-03-08T23:17:41.382556+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: audit 2026-03-08T23:17:41.406458+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:42.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:42 vm10 bash[20034]: audit 2026-03-08T23:17:41.406458+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:17:44.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:44 vm04 bash[19918]: cluster 2026-03-08T23:17:42.182808+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:44 vm04 bash[19918]: cluster 2026-03-08T23:17:42.182808+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:44.393 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:44 vm02 bash[17457]: cluster 2026-03-08T23:17:42.182808+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:44 vm02 bash[17457]: cluster 2026-03-08T23:17:42.182808+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:44 vm10 bash[20034]: cluster 2026-03-08T23:17:42.182808+0000 mgr.x (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:44 vm10 bash[20034]: cluster 2026-03-08T23:17:42.182808+0000 mgr.x 
(mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:45.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:45 vm04 bash[19918]: audit 2026-03-08T23:17:44.716534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T23:17:45.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:45 vm04 bash[19918]: audit 2026-03-08T23:17:44.716534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T23:17:45.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:45 vm02 bash[17457]: audit 2026-03-08T23:17:44.716534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T23:17:45.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:45 vm02 bash[17457]: audit 2026-03-08T23:17:44.716534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T23:17:45.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:45 vm10 bash[20034]: audit 2026-03-08T23:17:44.716534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T23:17:45.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:45 vm10 bash[20034]: audit 2026-03-08T23:17:44.716534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T23:17:46.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: cluster 2026-03-08T23:17:44.183019+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: cluster 2026-03-08T23:17:44.183019+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: audit 2026-03-08T23:17:45.036256+0000 mon.a (mon.0) 297 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: audit 2026-03-08T23:17:45.036256+0000 mon.a (mon.0) 297 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: cluster 2026-03-08T23:17:45.037511+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e6: 1 
total, 0 up, 1 in 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: cluster 2026-03-08T23:17:45.037511+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: audit 2026-03-08T23:17:45.037689+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: audit 2026-03-08T23:17:45.037689+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: audit 2026-03-08T23:17:45.037774+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:46 vm04 bash[19918]: audit 2026-03-08T23:17:45.037774+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: cluster 2026-03-08T23:17:44.183019+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: cluster 2026-03-08T23:17:44.183019+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: audit 2026-03-08T23:17:45.036256+0000 mon.a (mon.0) 297 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: audit 2026-03-08T23:17:45.036256+0000 mon.a (mon.0) 297 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: cluster 2026-03-08T23:17:45.037511+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: cluster 2026-03-08T23:17:45.037511+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: audit 2026-03-08T23:17:45.037689+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-08T23:17:46.394 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: audit 2026-03-08T23:17:45.037689+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: audit 2026-03-08T23:17:45.037774+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:46 vm02 bash[17457]: audit 2026-03-08T23:17:45.037774+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: cluster 2026-03-08T23:17:44.183019+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: cluster 2026-03-08T23:17:44.183019+0000 mgr.x (mgr.14150) 88 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: audit 2026-03-08T23:17:45.036256+0000 mon.a (mon.0) 297 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: audit 2026-03-08T23:17:45.036256+0000 mon.a (mon.0) 297 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: cluster 2026-03-08T23:17:45.037511+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: cluster 2026-03-08T23:17:45.037511+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T23:17:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: audit 2026-03-08T23:17:45.037689+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-08T23:17:46.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: audit 2026-03-08T23:17:45.037689+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-08T23:17:46.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: audit 2026-03-08T23:17:45.037774+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 
2026-03-08T23:17:46.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:46 vm10 bash[20034]: audit 2026-03-08T23:17:45.037774+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:47.360 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:47 vm02 bash[17457]: audit 2026-03-08T23:17:46.039011+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-08T23:17:47.360 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:47 vm02 bash[17457]: cluster 2026-03-08T23:17:46.041328+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-08T23:17:47.360 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:47 vm02 bash[17457]: audit 2026-03-08T23:17:46.042146+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:47.360 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:47 vm02 bash[17457]: audit 2026-03-08T23:17:46.049367+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:47.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:47 vm04 bash[19918]: audit 2026-03-08T23:17:46.039011+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-08T23:17:47.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:47 vm04 bash[19918]: cluster 2026-03-08T23:17:46.041328+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-08T23:17:47.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:47 vm04 bash[19918]: audit 2026-03-08T23:17:46.042146+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:47.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:47 vm04 bash[19918]: audit 2026-03-08T23:17:46.049367+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:47.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:47 vm10 bash[20034]: audit 2026-03-08T23:17:46.039011+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-08T23:17:47.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:47 vm10 bash[20034]: cluster 2026-03-08T23:17:46.041328+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-08T23:17:47.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:47 vm10 bash[20034]: audit 2026-03-08T23:17:46.042146+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:47.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:47 vm10 bash[20034]: audit 2026-03-08T23:17:46.049367+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
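[annotation, not harness output] The "weight":0.0195 in the "osd crush create-or-move" records above follows the Ceph convention that CRUSH weight is the device capacity expressed in TiB; this OSD sits on the ~20 GiB volume that later reports "20 GiB / 20 GiB avail" in the pgmap. A minimal check of that arithmetic (the 20 GiB figure is taken from the pgmap records below):

size_gib = 20                         # capacity seen later as "20 GiB / 20 GiB avail"
weight_tib = round(size_gib / 1024, 4)
print(weight_tib)                     # -> 0.0195, matching "weight":0.0195 above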
2026-03-08T23:17:48.358 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: cluster 2026-03-08T23:17:45.685093+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: cluster 2026-03-08T23:17:45.685152+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: cluster 2026-03-08T23:17:46.183224+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:47.044842+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: cluster 2026-03-08T23:17:47.049697+0000 mon.a (mon.0) 306 : cluster [INF] osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410] boot
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: cluster 2026-03-08T23:17:47.049782+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:47.051089+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:47.614319+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:47.621114+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:48.014169+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:48.014969+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:48.359 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:48 vm02 bash[17457]: audit 2026-03-08T23:17:48.021601+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: cluster 2026-03-08T23:17:45.685093+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:17:48.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: cluster 2026-03-08T23:17:45.685152+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: cluster 2026-03-08T23:17:46.183224+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:47.044842+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: cluster 2026-03-08T23:17:47.049697+0000 mon.a (mon.0) 306 : cluster [INF] osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410] boot
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: cluster 2026-03-08T23:17:47.049782+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:47.051089+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:47.614319+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:47.621114+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:48.014169+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:48.014969+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:48.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:48 vm04 bash[19918]: audit 2026-03-08T23:17:48.021601+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: cluster 2026-03-08T23:17:45.685093+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:17:48.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: cluster 2026-03-08T23:17:45.685152+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: cluster 2026-03-08T23:17:46.183224+0000 mgr.x (mgr.14150) 89 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:47.044842+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: cluster 2026-03-08T23:17:47.049697+0000 mon.a (mon.0) 306 : cluster [INF] osd.0 [v2:192.168.123.102:6802/706196410,v1:192.168.123.102:6803/706196410] boot
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: cluster 2026-03-08T23:17:47.049782+0000 mon.a (mon.0) 307 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:47.051089+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:47.614319+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:47.621114+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:48.014169+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:48.014969+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:48.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:48 vm10 bash[20034]: audit 2026-03-08T23:17:48.021601+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:48.683 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 0 on host 'vm02'
2026-03-08T23:17:48.770 DEBUG:teuthology.orchestra.run.vm02:osd.0> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.0.service
2026-03-08T23:17:48.771 INFO:tasks.cephadm:Deploying osd.1 on vm02 with /dev/vdd...
2026-03-08T23:17:48.771 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vdd
2026-03-08T23:17:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:49 vm02 bash[17457]: cluster 2026-03-08T23:17:48.183465+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:49 vm02 bash[17457]: cluster 2026-03-08T23:17:48.624537+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-08T23:17:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:49 vm02 bash[17457]: audit 2026-03-08T23:17:48.658608+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:49 vm02 bash[17457]: audit 2026-03-08T23:17:48.664166+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:49 vm02 bash[17457]: audit 2026-03-08T23:17:48.679126+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:49 vm10 bash[20034]: cluster 2026-03-08T23:17:48.183465+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:49 vm10 bash[20034]: cluster 2026-03-08T23:17:48.624537+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-08T23:17:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:49 vm10 bash[20034]: audit 2026-03-08T23:17:48.658608+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:49 vm10 bash[20034]: audit 2026-03-08T23:17:48.664166+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:49 vm10 bash[20034]: audit 2026-03-08T23:17:48.679126+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:50.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:49 vm04 bash[19918]: cluster 2026-03-08T23:17:48.183465+0000 mgr.x (mgr.14150) 90 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:50.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:49 vm04 bash[19918]: cluster 2026-03-08T23:17:48.624537+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-08T23:17:50.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:49 vm04 bash[19918]: audit 2026-03-08T23:17:48.658608+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:17:50.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:49 vm04 bash[19918]: audit 2026-03-08T23:17:48.664166+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:50.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:49 vm04 bash[19918]: audit 2026-03-08T23:17:48.679126+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:51.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:51 vm02 bash[17457]: cluster 2026-03-08T23:17:50.183706+0000 mgr.x (mgr.14150) 91 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:51.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:51 vm10 bash[20034]: cluster 2026-03-08T23:17:50.183706+0000 mgr.x (mgr.14150) 91 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:51 vm04 bash[19918]: cluster 2026-03-08T23:17:50.183706+0000 mgr.x (mgr.14150) 91 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:53.431 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:17:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:53 vm02 bash[17457]: cluster 2026-03-08T23:17:52.183958+0000 mgr.x (mgr.14150) 92 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:53 vm10 bash[20034]: cluster 2026-03-08T23:17:52.183958+0000 mgr.x (mgr.14150) 92 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:54.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:53 vm04 bash[19918]: cluster 2026-03-08T23:17:52.183958+0000 mgr.x (mgr.14150) 92 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:55.065 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:17:55.078 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm02:/dev/vdd
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: cluster 2026-03-08T23:17:54.184250+0000 mgr.x (mgr.14150) 93 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: cephadm 2026-03-08T23:17:54.310806+0000 mgr.x (mgr.14150) 94 : cephadm [INF] Detected new or changed devices on vm02
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: audit 2026-03-08T23:17:54.316703+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: audit 2026-03-08T23:17:54.322459+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: audit 2026-03-08T23:17:54.323418+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: cephadm 2026-03-08T23:17:54.323967+0000 mgr.x (mgr.14150) 95 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: cephadm 2026-03-08T23:17:54.324380+0000 mgr.x (mgr.14150) 96 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: audit 2026-03-08T23:17:54.324663+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: audit 2026-03-08T23:17:54.325085+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:55.321 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:55 vm02 bash[17457]: audit 2026-03-08T23:17:54.329298+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: cluster 2026-03-08T23:17:54.184250+0000 mgr.x (mgr.14150) 93 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: cephadm 2026-03-08T23:17:54.310806+0000 mgr.x (mgr.14150) 94 : cephadm [INF] Detected new or changed devices on vm02
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: audit 2026-03-08T23:17:54.316703+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: audit 2026-03-08T23:17:54.322459+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
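[annotation, not harness output] The two cephadm invocations above — "ceph-volume ... -- lvm zap" followed by "shell ... -- ceph orch daemon add osd" — are the per-device OSD deployment step that tasks.cephadm loops over ("Deploying osd.1 on vm02 with /dev/vdd..."). Reproduced by hand it would look roughly like the sketch below; the deploy_osd helper and the wrapping are illustrative, while the image, fsid, paths, and subcommands are copied from the DEBUG lines:

import subprocess

CEPHADM = "/home/ubuntu/cephtest/cephadm"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "91105a84-1b44-11f1-9a43-e95894f13987"
CONF = ["-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", FSID]

def deploy_osd(host: str, device: str) -> None:
    # 1. Wipe any previous LVM/partition state on the device (the "lvm zap" step).
    subprocess.run(["sudo", CEPHADM, "--image", IMAGE, "ceph-volume",
                    *CONF, "--", "lvm", "zap", device], check=True)
    # 2. Hand the clean device to the cephadm orchestrator to create the OSD.
    subprocess.run(["sudo", CEPHADM, "--image", IMAGE, "shell",
                    *CONF, "--", "ceph", "orch", "daemon", "add", "osd",
                    f"{host}:{device}"], check=True)

deploy_osd("vm02", "/dev/vdd")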
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: audit 2026-03-08T23:17:54.323418+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: cephadm 2026-03-08T23:17:54.323967+0000 mgr.x (mgr.14150) 95 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: cephadm 2026-03-08T23:17:54.324380+0000 mgr.x (mgr.14150) 96 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: audit 2026-03-08T23:17:54.324663+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: audit 2026-03-08T23:17:54.325085+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:55.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:55 vm04 bash[19918]: audit 2026-03-08T23:17:54.329298+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: cluster 2026-03-08T23:17:54.184250+0000 mgr.x (mgr.14150) 93 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: cephadm 2026-03-08T23:17:54.310806+0000 mgr.x (mgr.14150) 94 : cephadm [INF] Detected new or changed devices on vm02
2026-03-08T23:17:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: audit 2026-03-08T23:17:54.316703+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: audit 2026-03-08T23:17:54.322459+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: audit 2026-03-08T23:17:54.323418+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:17:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: cephadm 2026-03-08T23:17:54.323967+0000 mgr.x (mgr.14150) 95 : cephadm [INF] Adjusting osd_memory_target on vm02 to 455.7M
2026-03-08T23:17:55.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: cephadm 2026-03-08T23:17:54.324380+0000 mgr.x (mgr.14150) 96 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
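[annotation, not harness output] The [WRN] lines above are three monitors replaying a single event. cephadm's memory autotuner first removed the per-OSD override (the "config rm ... osd_memory_target" audit entry), then tried to set a computed target of 477921689 bytes (printed as "455.7M" in the [INF] line); the mon rejected it because this build enforces a floor of 939524096 bytes on osd_memory_target, so the tuned value was not applied. The unit conversion, for reference:

computed = 477_921_689                 # bytes, from the [WRN] record
floor = 939_524_096                    # minimum quoted by the mon
print(f"{computed / 2**20:.2f} MiB")   # 455.78 MiB, logged (truncated) as "455.7M"
print(f"{floor / 2**20:.0f} MiB")      # 896 MiB
print(computed < floor)                # True -> "below minimum", value rejected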
2026-03-08T23:17:55.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: audit 2026-03-08T23:17:54.324663+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:17:55.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: audit 2026-03-08T23:17:54.325085+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:17:55.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:55 vm10 bash[20034]: audit 2026-03-08T23:17:54.329298+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:17:57.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:57 vm04 bash[19918]: cluster 2026-03-08T23:17:56.184480+0000 mgr.x (mgr.14150) 97 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:57.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:57 vm02 bash[17457]: cluster 2026-03-08T23:17:56.184480+0000 mgr.x (mgr.14150) 97 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:57.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:57 vm10 bash[20034]: cluster 2026-03-08T23:17:56.184480+0000 mgr.x (mgr.14150) 97 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:59.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:17:59 vm04 bash[19918]: cluster 2026-03-08T23:17:58.184680+0000 mgr.x (mgr.14150) 98 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:59.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:17:59 vm02 bash[17457]: cluster 2026-03-08T23:17:58.184680+0000 mgr.x (mgr.14150) 98 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:59.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:17:59 vm10 bash[20034]: cluster 2026-03-08T23:17:58.184680+0000 mgr.x (mgr.14150) 98 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:17:59.692 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:18:00.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:00 vm04 bash[19918]: audit 2026-03-08T23:17:59.977270+0000 mgr.x (mgr.14150) 99 : audit [DBG] from='client.14235 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:18:00.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:00 vm04 bash[19918]: audit 2026-03-08T23:17:59.978549+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:18:00.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:00 vm04 bash[19918]: audit 2026-03-08T23:17:59.979937+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:18:00.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:00 vm04 bash[19918]: audit 2026-03-08T23:17:59.980385+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:00.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:00 vm02 bash[17457]: audit 2026-03-08T23:17:59.977270+0000 mgr.x (mgr.14150) 99 : audit [DBG] from='client.14235 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:18:00.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:00 vm02 bash[17457]: audit 2026-03-08T23:17:59.978549+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:18:00.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:00 vm02 bash[17457]: audit 2026-03-08T23:17:59.979937+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:18:00.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:00 vm02 bash[17457]: audit 2026-03-08T23:17:59.980385+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:00.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:00 vm10 bash[20034]: audit 2026-03-08T23:17:59.977270+0000 mgr.x (mgr.14150) 99 : audit [DBG] from='client.14235 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:18:00.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:00 vm10 bash[20034]: audit 2026-03-08T23:17:59.978549+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:18:00.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:00 vm10 bash[20034]: audit 2026-03-08T23:17:59.979937+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:18:00.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:00 vm10 bash[20034]: audit 2026-03-08T23:17:59.980385+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:01.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:01 vm04 bash[19918]: cluster 2026-03-08T23:18:00.184955+0000 mgr.x (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:01.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:01 vm02 bash[17457]: cluster 2026-03-08T23:18:00.184955+0000 mgr.x (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:01.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:01 vm10 bash[20034]: cluster 2026-03-08T23:18:00.184955+0000 mgr.x (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:03.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:03 vm04 bash[19918]: cluster 2026-03-08T23:18:02.185207+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:03.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:03 vm02 bash[17457]: cluster 2026-03-08T23:18:02.185207+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
vm10 bash[20034]: cluster 2026-03-08T23:18:02.185207+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:03.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:03 vm10 bash[20034]: cluster 2026-03-08T23:18:02.185207+0000 mgr.x (mgr.14150) 101 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:05.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:05 vm04 bash[19918]: cluster 2026-03-08T23:18:04.185426+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:05.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:05 vm04 bash[19918]: cluster 2026-03-08T23:18:04.185426+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:05 vm02 bash[17457]: cluster 2026-03-08T23:18:04.185426+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:05 vm02 bash[17457]: cluster 2026-03-08T23:18:04.185426+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:05.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:05 vm10 bash[20034]: cluster 2026-03-08T23:18:04.185426+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:05.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:05 vm10 bash[20034]: cluster 2026-03-08T23:18:04.185426+0000 mgr.x (mgr.14150) 102 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:05.407143+0000 mon.a (mon.0) 327 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]: dispatch 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:05.407143+0000 mon.a (mon.0) 327 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]: dispatch 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:05.409700+0000 mon.a (mon.0) 328 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]': finished 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:05.409700+0000 mon.a (mon.0) 328 : audit [INF] from='client.? 
192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]': finished 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: cluster 2026-03-08T23:18:05.413014+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: cluster 2026-03-08T23:18:05.413014+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:05.413249+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:18:06.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:05.413249+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:18:06.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:06.053078+0000 mon.a (mon.0) 331 : audit [DBG] from='client.? 192.168.123.102:0/1079901665' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:18:06.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:06 vm04 bash[19918]: audit 2026-03-08T23:18:06.053078+0000 mon.a (mon.0) 331 : audit [DBG] from='client.? 192.168.123.102:0/1079901665' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:05.407143+0000 mon.a (mon.0) 327 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]: dispatch 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:05.407143+0000 mon.a (mon.0) 327 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]: dispatch 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:05.409700+0000 mon.a (mon.0) 328 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]': finished 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:05.409700+0000 mon.a (mon.0) 328 : audit [INF] from='client.? 
192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]': finished 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: cluster 2026-03-08T23:18:05.413014+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: cluster 2026-03-08T23:18:05.413014+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:05.413249+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:05.413249+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:06.053078+0000 mon.a (mon.0) 331 : audit [DBG] from='client.? 192.168.123.102:0/1079901665' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:18:06.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:06 vm02 bash[17457]: audit 2026-03-08T23:18:06.053078+0000 mon.a (mon.0) 331 : audit [DBG] from='client.? 192.168.123.102:0/1079901665' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:05.407143+0000 mon.a (mon.0) 327 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]: dispatch 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:05.407143+0000 mon.a (mon.0) 327 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]: dispatch 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:05.409700+0000 mon.a (mon.0) 328 : audit [INF] from='client.? 192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]': finished 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:05.409700+0000 mon.a (mon.0) 328 : audit [INF] from='client.? 
192.168.123.102:0/2312310959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "329d7c16-85bb-4531-9c68-b1e468e49038"}]': finished 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: cluster 2026-03-08T23:18:05.413014+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: cluster 2026-03-08T23:18:05.413014+0000 mon.a (mon.0) 329 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:05.413249+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:05.413249+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:06.053078+0000 mon.a (mon.0) 331 : audit [DBG] from='client.? 192.168.123.102:0/1079901665' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:18:06.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:06 vm10 bash[20034]: audit 2026-03-08T23:18:06.053078+0000 mon.a (mon.0) 331 : audit [DBG] from='client.? 192.168.123.102:0/1079901665' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:18:07.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:07 vm04 bash[19918]: cluster 2026-03-08T23:18:06.185685+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:07.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:07 vm04 bash[19918]: cluster 2026-03-08T23:18:06.185685+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:07.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:07 vm02 bash[17457]: cluster 2026-03-08T23:18:06.185685+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:07.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:07 vm02 bash[17457]: cluster 2026-03-08T23:18:06.185685+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:07.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:07 vm10 bash[20034]: cluster 2026-03-08T23:18:06.185685+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:07.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:07 vm10 bash[20034]: cluster 2026-03-08T23:18:06.185685+0000 mgr.x (mgr.14150) 103 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:09.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:09 vm04 bash[19918]: cluster 2026-03-08T23:18:08.185981+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-08T23:18:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:09 vm04 bash[19918]: cluster 2026-03-08T23:18:08.185981+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 
2026-03-08T23:18:09.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:09 vm02 bash[17457]: cluster 2026-03-08T23:18:08.185981+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:09.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:09 vm10 bash[20034]: cluster 2026-03-08T23:18:08.185981+0000 mgr.x (mgr.14150) 104 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:11.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:11 vm04 bash[19918]: cluster 2026-03-08T23:18:10.186232+0000 mgr.x (mgr.14150) 105 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:11.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:11 vm02 bash[17457]: cluster 2026-03-08T23:18:10.186232+0000 mgr.x (mgr.14150) 105 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:11.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:11 vm10 bash[20034]: cluster 2026-03-08T23:18:10.186232+0000 mgr.x (mgr.14150) 105 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:13.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:13 vm04 bash[19918]: cluster 2026-03-08T23:18:12.186485+0000 mgr.x (mgr.14150) 106 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:13.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:13 vm02 bash[17457]: cluster 2026-03-08T23:18:12.186485+0000 mgr.x (mgr.14150) 106 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:13.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:13 vm10 bash[20034]: cluster 2026-03-08T23:18:12.186485+0000 mgr.x (mgr.14150) 106 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:14.616 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:14 vm02 bash[17457]: audit 2026-03-08T23:18:14.360241+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-08T23:18:14.616 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:14 vm02 bash[17457]: audit 2026-03-08T23:18:14.360921+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:14.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:14 vm04 bash[19918]: audit 2026-03-08T23:18:14.360241+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-08T23:18:14.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:14 vm04 bash[19918]: audit 2026-03-08T23:18:14.360921+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:14.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:14 vm10 bash[20034]: audit 2026-03-08T23:18:14.360241+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-08T23:18:14.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:14 vm10 bash[20034]: audit 2026-03-08T23:18:14.360921+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:15.181 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:18:15.181 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:18:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:18:15.181 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:18:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
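The KillMode=none warnings refer to the cephadm-generated unit ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service; the unit file itself carries that setting (presumably so systemd does not tear down the daemon's container processes), so on Ubuntu 22.04 this is expected noise rather than a test failure. For a unit you actually control, the change systemd is asking for would be a drop-in override, sketched here purely for illustration (the unit name is a placeholder; overriding a cephadm-managed unit this way is untested and may conflict with how cephadm manages containers):
    # hypothetical drop-in replacing KillMode=none with the safer 'mixed'
    sudo mkdir -p /etc/systemd/system/some-unit@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/some-unit@.service.d/10-killmode.conf
    sudo systemctl daemon-reload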
2026-03-08T23:18:15.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:15 vm04 bash[19918]: cluster 2026-03-08T23:18:14.186788+0000 mgr.x (mgr.14150) 107 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:15.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:15 vm04 bash[19918]: cephadm 2026-03-08T23:18:14.361447+0000 mgr.x (mgr.14150) 108 : cephadm [INF] Deploying daemon osd.1 on vm02
2026-03-08T23:18:15.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:15 vm04 bash[19918]: audit 2026-03-08T23:18:15.432327+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:15.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:15 vm04 bash[19918]: audit 2026-03-08T23:18:15.439888+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:15.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:15 vm04 bash[19918]: audit 2026-03-08T23:18:15.445776+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:15.902 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:15 vm02 bash[17457]: cluster 2026-03-08T23:18:14.186788+0000 mgr.x (mgr.14150) 107 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:15.902 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:15 vm02 bash[17457]: cephadm 2026-03-08T23:18:14.361447+0000 mgr.x (mgr.14150) 108 : cephadm [INF] Deploying daemon osd.1 on vm02
2026-03-08T23:18:15.902 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:15 vm02 bash[17457]: audit 2026-03-08T23:18:15.432327+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:15.902 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:15 vm02 bash[17457]: audit 2026-03-08T23:18:15.439888+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:15.902 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:15 vm02 bash[17457]: audit 2026-03-08T23:18:15.445776+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:15.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:15 vm10 bash[20034]: cluster 2026-03-08T23:18:14.186788+0000 mgr.x (mgr.14150) 107 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:15.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:15 vm10 bash[20034]: cephadm 2026-03-08T23:18:14.361447+0000 mgr.x (mgr.14150) 108 : cephadm [INF] Deploying daemon osd.1 on vm02
2026-03-08T23:18:15.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:15 vm10 bash[20034]: audit 2026-03-08T23:18:15.432327+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:15.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:15 vm10 bash[20034]: audit 2026-03-08T23:18:15.439888+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:15.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:15 vm10 bash[20034]: audit 2026-03-08T23:18:15.445776+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:17.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:17 vm04 bash[19918]: cluster 2026-03-08T23:18:16.187031+0000 mgr.x (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:17.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:17 vm02 bash[17457]: cluster 2026-03-08T23:18:16.187031+0000 mgr.x (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:17.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:17 vm10 bash[20034]: cluster 2026-03-08T23:18:16.187031+0000 mgr.x (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:19 vm02 bash[17457]: cluster 2026-03-08T23:18:18.187335+0000 mgr.x (mgr.14150) 110 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:19 vm02 bash[17457]: audit 2026-03-08T23:18:19.314690+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-08T23:18:19.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:19 vm04 bash[19918]: cluster 2026-03-08T23:18:18.187335+0000 mgr.x (mgr.14150) 110 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:19 vm04 bash[19918]: audit 2026-03-08T23:18:19.314690+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-08T23:18:19.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:19 vm10 bash[20034]: cluster 2026-03-08T23:18:18.187335+0000 mgr.x (mgr.14150) 110 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:19.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:19 vm10 bash[20034]: audit 2026-03-08T23:18:19.314690+0000 mon.a (mon.0) 337 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-08T23:18:20.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:20 vm04 bash[19918]: audit 2026-03-08T23:18:19.490171+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-08T23:18:20.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:20 vm04 bash[19918]: cluster 2026-03-08T23:18:19.492959+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-08T23:18:20.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:20 vm04 bash[19918]: audit 2026-03-08T23:18:19.493874+0000 mon.a (mon.0) 340 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-08T23:18:20.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:20 vm04 bash[19918]: audit 2026-03-08T23:18:19.493998+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:20.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:20 vm04 bash[19918]: audit 2026-03-08T23:18:20.493287+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-08T23:18:20.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:20 vm02 bash[17457]: audit 2026-03-08T23:18:19.490171+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-08T23:18:20.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:20 vm02 bash[17457]: cluster 2026-03-08T23:18:19.492959+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-08T23:18:20.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:20 vm02 bash[17457]: audit 2026-03-08T23:18:19.493874+0000 mon.a (mon.0) 340 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-08T23:18:20.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:20 vm02 bash[17457]: audit 2026-03-08T23:18:19.493998+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:20.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:20 vm02 bash[17457]: audit 2026-03-08T23:18:20.493287+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-08T23:18:20.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:20 vm10 bash[20034]: audit 2026-03-08T23:18:19.490171+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-08T23:18:20.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:20 vm10 bash[20034]: cluster 2026-03-08T23:18:19.492959+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-08T23:18:20.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:20 vm10 bash[20034]: audit 2026-03-08T23:18:19.493874+0000 mon.a (mon.0) 340 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-08T23:18:20.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:20 vm10 bash[20034]: audit 2026-03-08T23:18:19.493998+0000 mon.a (mon.0) 341 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:20.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:20 vm10 bash[20034]: audit 2026-03-08T23:18:20.493287+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished
2026-03-08T23:18:21.769 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:21 vm02 bash[17457]: cluster 2026-03-08T23:18:20.187564+0000 mgr.x (mgr.14150) 111 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:21.769 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:21 vm02 bash[17457]: cluster 2026-03-08T23:18:20.496268+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-08T23:18:21.769 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:21 vm02 bash[17457]: audit 2026-03-08T23:18:20.497142+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.770 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:21 vm02 bash[17457]: audit 2026-03-08T23:18:20.498969+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.770 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:21 vm02 bash[17457]: audit 2026-03-08T23:18:21.498949+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:21 vm04 bash[19918]: cluster 2026-03-08T23:18:20.187564+0000 mgr.x (mgr.14150) 111 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
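The weight 0.0195 in the crush create-or-move calls above is not arbitrary: CRUSH weights conventionally express the device's capacity in TiB, and 0.0195 TiB is roughly 20 GiB, consistent with the "20 GiB / 20 GiB avail" the pgmap lines report. A quick check of the arithmetic:
    # 20 GiB expressed in TiB, the unit CRUSH weights are derived from
    echo 'scale=8; 20/1024' | bc    # prints 0.01953125, reported by Ceph as 0.0195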
2026-03-08T23:18:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:21 vm04 bash[19918]: cluster 2026-03-08T23:18:20.496268+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-08T23:18:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:21 vm04 bash[19918]: audit 2026-03-08T23:18:20.497142+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:21 vm04 bash[19918]: audit 2026-03-08T23:18:20.498969+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:21 vm04 bash[19918]: audit 2026-03-08T23:18:21.498949+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:21 vm10 bash[20034]: cluster 2026-03-08T23:18:20.187564+0000 mgr.x (mgr.14150) 111 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:21 vm10 bash[20034]: cluster 2026-03-08T23:18:20.496268+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-08T23:18:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:21 vm10 bash[20034]: audit 2026-03-08T23:18:20.497142+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:21 vm10 bash[20034]: audit 2026-03-08T23:18:20.498969+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:21 vm10 bash[20034]: audit 2026-03-08T23:18:21.498949+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:22.758 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: cluster 2026-03-08T23:18:20.300654+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:18:22.758 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: cluster 2026-03-08T23:18:20.300740+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:21.614445+0000 mon.a (mon.0) 347 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1'
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:21.724075+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:21.732157+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:21.733151+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:21.733980+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:21.740026+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.759 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:22 vm02 bash[17457]: audit 2026-03-08T23:18:22.499205+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:22.841 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 1 on host 'vm02'
2026-03-08T23:18:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: cluster 2026-03-08T23:18:20.300654+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:18:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: cluster 2026-03-08T23:18:20.300740+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:18:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:21.614445+0000 mon.a (mon.0) 347 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1'
2026-03-08T23:18:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:21.724075+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:21.732157+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:21.733151+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:21.733980+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:21.740026+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:22 vm10 bash[20034]: audit 2026-03-08T23:18:22.499205+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:22.938 DEBUG:teuthology.orchestra.run.vm02:osd.1> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.1.service
2026-03-08T23:18:22.939 INFO:tasks.cephadm:Deploying osd.2 on vm04 with /dev/vde...
2026-03-08T23:18:22.939 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vde
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: cluster 2026-03-08T23:18:20.300654+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: cluster 2026-03-08T23:18:20.300740+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:21.614445+0000 mon.a (mon.0) 347 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986]' entity='osd.1'
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:21.724075+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:21.732157+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:21.733151+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:21.733980+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:22.945 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:21.740026+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:22.946 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:22 vm04 bash[19918]: audit 2026-03-08T23:18:22.499205+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: cluster 2026-03-08T23:18:22.187827+0000 mgr.x (mgr.14150) 112 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: cluster 2026-03-08T23:18:22.623135+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986] boot
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: cluster 2026-03-08T23:18:22.623208+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: audit 2026-03-08T23:18:22.624771+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: audit 2026-03-08T23:18:22.819640+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: audit 2026-03-08T23:18:22.825484+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:23 vm02 bash[17457]: audit 2026-03-08T23:18:22.830817+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: cluster 2026-03-08T23:18:22.187827+0000 mgr.x (mgr.14150) 112 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: cluster 2026-03-08T23:18:22.623135+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986] boot
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: cluster 2026-03-08T23:18:22.623208+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: audit 2026-03-08T23:18:22.624771+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: audit 2026-03-08T23:18:22.819640+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: audit 2026-03-08T23:18:22.825484+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:23 vm10 bash[20034]: audit 2026-03-08T23:18:22.830817+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:24.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: cluster 2026-03-08T23:18:22.187827+0000 mgr.x (mgr.14150) 112 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-08T23:18:24.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: cluster 2026-03-08T23:18:22.623135+0000 mon.a (mon.0) 354 : cluster [INF] osd.1 [v2:192.168.123.102:6810/2405858986,v1:192.168.123.102:6811/2405858986] boot
2026-03-08T23:18:24.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: cluster 2026-03-08T23:18:22.623208+0000 mon.a (mon.0) 355 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-08T23:18:24.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: audit 2026-03-08T23:18:22.624771+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:18:24.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: audit 2026-03-08T23:18:22.819640+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:24.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: audit 2026-03-08T23:18:22.825484+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:24.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:23 vm04 bash[19918]: audit 2026-03-08T23:18:22.830817+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:25.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:24 vm04 bash[19918]: cluster 2026-03-08T23:18:23.837100+0000 mon.a (mon.0) 360 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-08T23:18:25.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:24 vm02 bash[17457]: cluster 2026-03-08T23:18:23.837100+0000 mon.a (mon.0) 360 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-08T23:18:25.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:24 vm10 bash[20034]: cluster 2026-03-08T23:18:23.837100+0000 mon.a (mon.0) 360 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-08T23:18:26.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:25 vm04 bash[19918]: cluster 2026-03-08T23:18:24.188061+0000 mgr.x (mgr.14150) 113 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:26.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:25 vm02 bash[17457]: cluster 2026-03-08T23:18:24.188061+0000 mgr.x (mgr.14150) 113 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:26.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:25 vm10 bash[20034]: cluster 2026-03-08T23:18:24.188061+0000 mgr.x (mgr.14150) 113 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:26.549 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config
2026-03-08T23:18:27.372 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:18:27.387 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm04:/dev/vde
2026-03-08T23:18:27.849 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:27 vm04 bash[19918]: cluster 2026-03-08T23:18:26.188289+0000 mgr.x (mgr.14150) 114 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:27.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:27 vm02 bash[17457]: cluster 2026-03-08T23:18:26.188289+0000 mgr.x (mgr.14150) 114 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:28.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:27 vm10 bash[20034]: cluster 2026-03-08T23:18:26.188289+0000 mgr.x (mgr.14150) 114 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: cluster 2026-03-08T23:18:28.188525+0000 mgr.x (mgr.14150) 115 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: cephadm 2026-03-08T23:18:28.459019+0000 mgr.x (mgr.14150) 116 : cephadm [INF] Detected new or changed devices on vm02
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.464655+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.469244+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.470271+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.470793+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: cephadm 2026-03-08T23:18:28.471084+0000 mgr.x (mgr.14150) 117 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: cephadm 2026-03-08T23:18:28.471431+0000 mgr.x (mgr.14150) 118 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.471724+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.472090+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:29 vm04 bash[19918]: audit 2026-03-08T23:18:28.476000+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: cluster 2026-03-08T23:18:28.188525+0000 mgr.x (mgr.14150) 115 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: cephadm 2026-03-08T23:18:28.459019+0000 mgr.x (mgr.14150) 116 : cephadm [INF] Detected new or changed devices on vm02
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.464655+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.469244+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.470271+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.470793+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: cephadm 2026-03-08T23:18:28.471084+0000 mgr.x (mgr.14150) 117 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: cephadm 2026-03-08T23:18:28.471431+0000 mgr.x (mgr.14150) 118 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.471724+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.472090+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:29 vm02 bash[17457]: audit 2026-03-08T23:18:28.476000+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: cluster 2026-03-08T23:18:28.188525+0000 mgr.x (mgr.14150) 115 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: cephadm 2026-03-08T23:18:28.459019+0000 mgr.x (mgr.14150) 116 : cephadm [INF] Detected new or changed devices on vm02
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.464655+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.469244+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.470271+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.470793+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: cephadm 2026-03-08T23:18:28.471084+0000 mgr.x (mgr.14150) 117 : cephadm [INF] Adjusting osd_memory_target on vm02 to 227.8M
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: cephadm 2026-03-08T23:18:28.471431+0000 mgr.x (mgr.14150) 118 : cephadm [WRN] Unable to set osd_memory_target on vm02 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.471724+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.472090+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:29.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:29 vm10 bash[20034]: audit 2026-03-08T23:18:28.476000+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:31.033 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config
2026-03-08T23:18:31.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:31 vm04 bash[19918]: cluster 2026-03-08T23:18:30.188799+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:31 vm04 bash[19918]: audit 2026-03-08T23:18:31.304695+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:18:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:31 vm04 bash[19918]: audit 2026-03-08T23:18:31.305831+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:18:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:31 vm04 bash[19918]: audit 2026-03-08T23:18:31.306198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:31 vm04 bash[19918]: audit 2026-03-08T23:18:31.306198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: cluster 2026-03-08T23:18:30.188799+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: cluster 2026-03-08T23:18:30.188799+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: audit 2026-03-08T23:18:31.304695+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: audit 2026-03-08T23:18:31.304695+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: audit 2026-03-08T23:18:31.305831+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: audit 2026-03-08T23:18:31.305831+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: audit 2026-03-08T23:18:31.306198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:18:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:31 vm02 bash[17457]: audit 2026-03-08T23:18:31.306198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: cluster 2026-03-08T23:18:30.188799+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: cluster 2026-03-08T23:18:30.188799+0000 mgr.x (mgr.14150) 119 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: audit 2026-03-08T23:18:31.304695+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: audit 2026-03-08T23:18:31.304695+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 
192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: audit 2026-03-08T23:18:31.305831+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: audit 2026-03-08T23:18:31.305831+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: audit 2026-03-08T23:18:31.306198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:18:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:31 vm10 bash[20034]: audit 2026-03-08T23:18:31.306198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:18:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:32 vm04 bash[19918]: audit 2026-03-08T23:18:31.303149+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24146 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:18:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:32 vm04 bash[19918]: audit 2026-03-08T23:18:31.303149+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24146 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:18:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:32 vm02 bash[17457]: audit 2026-03-08T23:18:31.303149+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24146 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:18:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:32 vm02 bash[17457]: audit 2026-03-08T23:18:31.303149+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24146 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:18:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:32 vm10 bash[20034]: audit 2026-03-08T23:18:31.303149+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24146 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:18:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:32 vm10 bash[20034]: audit 2026-03-08T23:18:31.303149+0000 mgr.x (mgr.14150) 120 : audit [DBG] from='client.24146 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:18:33.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:33 vm04 bash[19918]: cluster 2026-03-08T23:18:32.189021+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:33.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:33 vm04 
bash[19918]: cluster 2026-03-08T23:18:32.189021+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:33 vm02 bash[17457]: cluster 2026-03-08T23:18:32.189021+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:33 vm02 bash[17457]: cluster 2026-03-08T23:18:32.189021+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:33 vm10 bash[20034]: cluster 2026-03-08T23:18:32.189021+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:33 vm10 bash[20034]: cluster 2026-03-08T23:18:32.189021+0000 mgr.x (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:35 vm04 bash[19918]: cluster 2026-03-08T23:18:34.189316+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:35 vm04 bash[19918]: cluster 2026-03-08T23:18:34.189316+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:35 vm02 bash[17457]: cluster 2026-03-08T23:18:34.189316+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:35 vm02 bash[17457]: cluster 2026-03-08T23:18:34.189316+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:35 vm10 bash[20034]: cluster 2026-03-08T23:18:34.189316+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:35 vm10 bash[20034]: cluster 2026-03-08T23:18:34.189316+0000 mgr.x (mgr.14150) 122 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: cluster 2026-03-08T23:18:36.189553+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: cluster 2026-03-08T23:18:36.189553+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: audit 2026-03-08T23:18:36.678130+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.104:0/1025042582' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch 2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: audit 2026-03-08T23:18:36.678130+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 
192.168.123.104:0/1025042582' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch
2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: audit 2026-03-08T23:18:36.678878+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch
2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: audit 2026-03-08T23:18:36.681656+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]': finished
2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: cluster 2026-03-08T23:18:36.684564+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: audit 2026-03-08T23:18:36.684741+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:37 vm04 bash[19918]: audit 2026-03-08T23:18:37.327056+0000 mon.b (mon.2) 5 : audit [DBG] from='client.? 192.168.123.104:0/3732846233' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: cluster 2026-03-08T23:18:36.189553+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: audit 2026-03-08T23:18:36.678130+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.104:0/1025042582' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: audit 2026-03-08T23:18:36.678878+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: audit 2026-03-08T23:18:36.681656+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]': finished
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: cluster 2026-03-08T23:18:36.684564+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: audit 2026-03-08T23:18:36.684741+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:37 vm02 bash[17457]: audit 2026-03-08T23:18:37.327056+0000 mon.b (mon.2) 5 : audit [DBG] from='client.? 192.168.123.104:0/3732846233' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: cluster 2026-03-08T23:18:36.189553+0000 mgr.x (mgr.14150) 123 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: audit 2026-03-08T23:18:36.678130+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.104:0/1025042582' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: audit 2026-03-08T23:18:36.678878+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]: dispatch
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: audit 2026-03-08T23:18:36.681656+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b"}]': finished
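The burst above is the OSD id allocation handshake: ceph-volume, authenticated with the client.bootstrap-osd key, asks the monitors to register a new OSD uuid; every mon relays the audit record, the leader mon.a commits it ("finished"), and osdmap e15 is published. The same allocation can be driven by hand; a minimal sketch reusing the uuid from this log (the cluster assigned id 2 here):

    # Ask the cluster to allocate an OSD id for this uuid; prints the assigned id.
    ceph osd new 5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b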
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: cluster 2026-03-08T23:18:36.684564+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: audit 2026-03-08T23:18:36.684741+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:37 vm10 bash[20034]: audit 2026-03-08T23:18:37.327056+0000 mon.b (mon.2) 5 : audit [DBG] from='client.? 192.168.123.104:0/3732846233' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:18:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:39 vm04 bash[19918]: cluster 2026-03-08T23:18:38.189791+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:39 vm02 bash[17457]: cluster 2026-03-08T23:18:38.189791+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:39 vm10 bash[20034]: cluster 2026-03-08T23:18:38.189791+0000 mgr.x (mgr.14150) 124 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:41 vm04 bash[19918]: cluster 2026-03-08T23:18:40.189992+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:41 vm02 bash[17457]: cluster 2026-03-08T23:18:40.189992+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:41 vm10 bash[20034]: cluster 2026-03-08T23:18:40.189992+0000 mgr.x (mgr.14150) 125 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:43.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:43 vm04 bash[19918]: cluster 2026-03-08T23:18:42.190226+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:43 vm02 bash[17457]: cluster 2026-03-08T23:18:42.190226+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:43 vm10 bash[20034]: cluster 2026-03-08T23:18:42.190226+0000 mgr.x (mgr.14150) 126 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:45.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:45 vm04 bash[19918]: cluster 2026-03-08T23:18:44.190458+0000 mgr.x (mgr.14150) 127 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:45 vm02 bash[17457]: cluster 2026-03-08T23:18:44.190458+0000 mgr.x (mgr.14150) 127 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:45 vm10 bash[20034]: cluster 2026-03-08T23:18:44.190458+0000 mgr.x (mgr.14150) 127 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:46.807 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:46 vm04 bash[19918]: audit 2026-03-08T23:18:46.239474+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-08T23:18:46.807 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:46 vm04 bash[19918]: audit 2026-03-08T23:18:46.239974+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:46.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:46 vm02 bash[17457]: audit 2026-03-08T23:18:46.239474+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-08T23:18:46.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:46 vm02 bash[17457]: audit 2026-03-08T23:18:46.239974+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:46.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:46 vm10 bash[20034]: audit 2026-03-08T23:18:46.239474+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-08T23:18:46.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:46 vm10 bash[20034]: audit 2026-03-08T23:18:46.239974+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
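Before launching the osd.2 container, the mgr collects the two files cephadm ships into it: the daemon's keyring ("auth get osd.2") and a minimal ceph.conf naming the fsid and monitor addresses ("config generate-minimal-conf"). A sketch of the equivalent admin-shell calls:

    ceph auth get osd.2                  # keyring handed to the new daemon
    ceph config generate-minimal-conf    # minimal conf with fsid and mon hosts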
2026-03-08T23:18:47.057 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:18:47.347 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:18:47.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 bash[19918]: cluster 2026-03-08T23:18:46.190696+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:47.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 bash[19918]: cephadm 2026-03-08T23:18:46.240342+0000 mgr.x (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm04
2026-03-08T23:18:47.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 bash[19918]: audit 2026-03-08T23:18:47.263977+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:47.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 bash[19918]: audit 2026-03-08T23:18:47.269676+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:47.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:47 vm04 bash[19918]: audit 2026-03-08T23:18:47.274132+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
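The two systemd warnings are expected with cephadm-generated units: the unit template sets KillMode=none so that stopping the service leaves container teardown to the container runtime rather than to systemd, which newer systemd releases flag as deprecated. A sketch for inspecting the unit on the host, using the fsid from this run:

    systemctl cat ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.2.service | grep -n KillMode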
2026-03-08T23:18:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:47 vm02 bash[17457]: cluster 2026-03-08T23:18:46.190696+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:47 vm02 bash[17457]: cephadm 2026-03-08T23:18:46.240342+0000 mgr.x (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm04
2026-03-08T23:18:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:47 vm02 bash[17457]: audit 2026-03-08T23:18:47.263977+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:47 vm02 bash[17457]: audit 2026-03-08T23:18:47.269676+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:47 vm02 bash[17457]: audit 2026-03-08T23:18:47.274132+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:47 vm10 bash[20034]: cluster 2026-03-08T23:18:46.190696+0000 mgr.x (mgr.14150) 128 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:47 vm10 bash[20034]: cephadm 2026-03-08T23:18:46.240342+0000 mgr.x (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm04
2026-03-08T23:18:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:47 vm10 bash[20034]: audit 2026-03-08T23:18:47.263977+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:47 vm10 bash[20034]: audit 2026-03-08T23:18:47.269676+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:47 vm10 bash[20034]: audit 2026-03-08T23:18:47.274132+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:49.854 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:49 vm04 bash[19918]: cluster 2026-03-08T23:18:48.190985+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:49 vm02 bash[17457]: cluster 2026-03-08T23:18:48.190985+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:49 vm10 bash[20034]: cluster 2026-03-08T23:18:48.190985+0000 mgr.x (mgr.14150) 130 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:51.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:51 vm04 bash[19918]: cluster 2026-03-08T23:18:50.191214+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:51.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:51 vm04 bash[19918]: audit 2026-03-08T23:18:51.277836+0000 mon.b (mon.2) 6 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-08T23:18:51.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:51 vm04 bash[19918]: audit 2026-03-08T23:18:51.278514+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-08T23:18:51.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:51 vm02 bash[17457]: cluster 2026-03-08T23:18:50.191214+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:51.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:51 vm02 bash[17457]: audit 2026-03-08T23:18:51.277836+0000 mon.b (mon.2) 6 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-08T23:18:51.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:51 vm02 bash[17457]: audit 2026-03-08T23:18:51.278514+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-08T23:18:51.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:51 vm10 bash[20034]: cluster 2026-03-08T23:18:50.191214+0000 mgr.x (mgr.14150) 131 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:51.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:51 vm10 bash[20034]: audit 2026-03-08T23:18:51.277836+0000 mon.b (mon.2) 6 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-08T23:18:51.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:51 vm10 bash[20034]: audit 2026-03-08T23:18:51.278514+0000 mon.a (mon.0) 380 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-08T23:18:52.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:52 vm04 bash[19918]: audit 2026-03-08T23:18:51.582109+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-08T23:18:52.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:52 vm04 bash[19918]: cluster 2026-03-08T23:18:51.585417+0000 mon.a (mon.0) 382 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-08T23:18:52.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:52 vm04 bash[19918]: audit 2026-03-08T23:18:51.585723+0000 mon.b (mon.2) 7 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:18:52.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:52 vm04 bash[19918]: audit 2026-03-08T23:18:51.585787+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:52.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:52 vm04 bash[19918]: audit 2026-03-08T23:18:51.586331+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:18:52.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:52 vm02 bash[17457]: audit 2026-03-08T23:18:51.582109+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-08T23:18:52.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:52 vm02 bash[17457]: cluster 2026-03-08T23:18:51.585417+0000 mon.a (mon.0) 382 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-08T23:18:52.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:52 vm02 bash[17457]: audit 2026-03-08T23:18:51.585723+0000 mon.b (mon.2) 7 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:18:52.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:52 vm02 bash[17457]: audit 2026-03-08T23:18:51.585787+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:52.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:52 vm02 bash[17457]: audit 2026-03-08T23:18:51.586331+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:18:52.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:52 vm10 bash[20034]: audit 2026-03-08T23:18:51.582109+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-08T23:18:52.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:52 vm10 bash[20034]: cluster 2026-03-08T23:18:51.585417+0000 mon.a (mon.0) 382 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-08T23:18:52.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:52 vm10 bash[20034]: audit 2026-03-08T23:18:51.585723+0000 mon.b (mon.2) 7 : audit [INF] from='osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:18:52.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:52 vm10 bash[20034]: audit 2026-03-08T23:18:51.585787+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:52.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:52 vm10 bash[20034]: audit 2026-03-08T23:18:51.586331+0000 mon.a (mon.0) 384 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:18:53.603 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: cluster 2026-03-08T23:18:52.191441+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:52.584484+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
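With osd.2 running, it registers itself in the CRUSH map: "osd crush set-device-class" tags it hdd (osdmap e16), then "osd crush create-or-move" places it under host=vm04 within root=default at weight 0.0195. CRUSH weights are expressed in TiB, so 0.0195 corresponds to roughly 20 GiB of capacity. The equivalent admin commands, as a sketch:

    ceph osd crush set-device-class hdd osd.2
    ceph osd crush create-or-move osd.2 0.0195 host=vm04 root=default
    ceph osd tree    # verify device class, weight, and placement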
cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: cluster 2026-03-08T23:18:52.587286+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: cluster 2026-03-08T23:18:52.587286+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:52.588006+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:52.588006+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:52.589417+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:52.589417+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:53.287442+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:53.287442+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:53.292582+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:53 vm04 bash[19918]: audit 2026-03-08T23:18:53.292582+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: cluster 2026-03-08T23:18:52.191441+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: cluster 2026-03-08T23:18:52.191441+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:52.584484+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:52.584484+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' 
cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: cluster 2026-03-08T23:18:52.587286+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: cluster 2026-03-08T23:18:52.587286+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:52.588006+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:52.588006+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:52.589417+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:52.589417+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:53.287442+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:53.287442+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:53.292582+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:53 vm02 bash[17457]: audit 2026-03-08T23:18:53.292582+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: cluster 2026-03-08T23:18:52.191441+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: cluster 2026-03-08T23:18:52.191441+0000 mgr.x (mgr.14150) 132 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:52.584484+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:52.584484+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' 
cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: cluster 2026-03-08T23:18:52.587286+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: cluster 2026-03-08T23:18:52.587286+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T23:18:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:52.588006+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:52.588006+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:52.589417+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:52.589417+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:53.287442+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:53.287442+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:53.292582+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:53.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:53 vm10 bash[20034]: audit 2026-03-08T23:18:53.292582+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:54.361 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 2 on host 'vm04' 2026-03-08T23:18:54.437 DEBUG:teuthology.orchestra.run.vm04:osd.2> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.2.service 2026-03-08T23:18:54.438 INFO:tasks.cephadm:Deploying osd.3 on vm04 with /dev/vdd... 
2026-03-08T23:18:54.438 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vdd
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: cluster 2026-03-08T23:18:52.316932+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: cluster 2026-03-08T23:18:52.316974+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:53.596014+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: cluster 2026-03-08T23:18:53.599601+0000 mon.a (mon.0) 392 : cluster [INF] osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672] boot
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: cluster 2026-03-08T23:18:53.599716+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:53.600604+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:53.703369+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:53.703982+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:53.708853+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:54.224626+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:54.348179+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:54.353135+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:54.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:54 vm04 bash[19918]: audit 2026-03-08T23:18:54.358468+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
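With osdmap e18 reporting 3 up and 3 in, the mgr creates its internal .mgr pool with a single PG, passing yes_i_really_mean_it because pg_num 1 is below the usual minimum checks. A sketch of the same mon command in CLI form:

    ceph osd pool create .mgr 1 --yes-i-really-mean-it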
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: cluster 2026-03-08T23:18:52.316932+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: cluster 2026-03-08T23:18:52.316974+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:53.596014+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: cluster 2026-03-08T23:18:53.599601+0000 mon.a (mon.0) 392 : cluster [INF] osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672] boot
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: cluster 2026-03-08T23:18:53.599716+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:53.600604+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:53.703369+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:53.703982+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:53.708853+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:54.224626+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:54.348179+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:54.353135+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:54 vm02 bash[17457]: audit 2026-03-08T23:18:54.358468+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:18:54.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: cluster 2026-03-08T23:18:52.316932+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:18:54.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: cluster 2026-03-08T23:18:52.316974+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:53.596014+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: cluster 2026-03-08T23:18:53.599601+0000 mon.a (mon.0) 392 : cluster [INF] osd.2 [v2:192.168.123.104:6800/1030884672,v1:192.168.123.104:6801/1030884672] boot
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: cluster 2026-03-08T23:18:53.599716+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:53.600604+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:53.703369+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:53.703982+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 
bash[20034]: audit 2026-03-08T23:18:53.708853+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:53.708853+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.224626+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.224626+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.348179+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.348179+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.353135+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.353135+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.358468+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:54.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:54 vm10 bash[20034]: audit 2026-03-08T23:18:54.358468+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: cluster 2026-03-08T23:18:54.191727+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: cluster 2026-03-08T23:18:54.191727+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: audit 2026-03-08T23:18:54.713211+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:56.125 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: audit 2026-03-08T23:18:54.713211+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: cluster 2026-03-08T23:18:54.715495+0000 mon.a (mon.0) 403 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: cluster 2026-03-08T23:18:54.715495+0000 mon.a (mon.0) 403 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: audit 2026-03-08T23:18:54.716984+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:56.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:55 vm04 bash[19918]: audit 2026-03-08T23:18:54.716984+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: cluster 2026-03-08T23:18:54.191727+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: cluster 2026-03-08T23:18:54.191727+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: audit 2026-03-08T23:18:54.713211+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: audit 2026-03-08T23:18:54.713211+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: cluster 2026-03-08T23:18:54.715495+0000 mon.a (mon.0) 403 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: cluster 2026-03-08T23:18:54.715495+0000 mon.a (mon.0) 403 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-08T23:18:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: audit 2026-03-08T23:18:54.716984+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:56.144 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:55 vm02 bash[17457]: audit 2026-03-08T23:18:54.716984+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: cluster 2026-03-08T23:18:54.191727+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: cluster 2026-03-08T23:18:54.191727+0000 mgr.x (mgr.14150) 133 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T23:18:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: audit 2026-03-08T23:18:54.713211+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: audit 2026-03-08T23:18:54.713211+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: cluster 2026-03-08T23:18:54.715495+0000 mon.a (mon.0) 403 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-08T23:18:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: cluster 2026-03-08T23:18:54.715495+0000 mon.a (mon.0) 403 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-08T23:18:56.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: audit 2026-03-08T23:18:54.716984+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:56.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:55 vm10 bash[20034]: audit 2026-03-08T23:18:54.716984+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.722431+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.722431+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 
bash[19918]: cluster 2026-03-08T23:18:55.724921+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: cluster 2026-03-08T23:18:55.724921+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.913372+0000 mon.a (mon.0) 407 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.913372+0000 mon.a (mon.0) 407 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.932749+0000 mon.a (mon.0) 408 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.932749+0000 mon.a (mon.0) 408 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.933119+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.933119+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.933213+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.933213+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.933265+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.933265+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.934507+0000 mon.c (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.934507+0000 mon.c (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 
bash[19918]: audit 2026-03-08T23:18:55.934952+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.934952+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.934993+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.934993+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.935025+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.935025+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.951760+0000 mon.c (mon.1) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.951760+0000 mon.c (mon.1) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.953620+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.953620+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.954066+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.954066+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.954137+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.954137+0000 mon.a 
(mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.954214+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.954214+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.968883+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:56 vm04 bash[19918]: audit 2026-03-08T23:18:55.968883+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.722431+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.722431+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: cluster 2026-03-08T23:18:55.724921+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: cluster 2026-03-08T23:18:55.724921+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.913372+0000 mon.a (mon.0) 407 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.913372+0000 mon.a (mon.0) 407 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.932749+0000 mon.a (mon.0) 408 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.932749+0000 mon.a (mon.0) 408 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.933119+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": 
"a"}]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.933119+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.933213+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.933213+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.933265+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.933265+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.934507+0000 mon.c (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.934507+0000 mon.c (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.934952+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.934952+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.934993+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.934993+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.935025+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.935025+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 
cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.951760+0000 mon.c (mon.1) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.951760+0000 mon.c (mon.1) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.953620+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.953620+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.954066+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.954066+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.954137+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.954137+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.954214+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.954214+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.968883+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:56 vm02 bash[17457]: audit 2026-03-08T23:18:55.968883+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.722431+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: cluster 2026-03-08T23:18:55.724921+0000 mon.a (mon.0) 406 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.913372+0000 mon.a (mon.0) 407 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.932749+0000 mon.a (mon.0) 408 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.933119+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.933213+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.933265+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.934507+0000 mon.c (mon.1) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.934952+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.934993+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.935025+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.951760+0000 mon.c (mon.1) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.953620+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.954066+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.954137+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.954214+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:18:57.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:56 vm10 bash[20034]: audit 2026-03-08T23:18:55.968883+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T23:18:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:57 vm04 bash[19918]: cluster 2026-03-08T23:18:56.191993+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:18:58.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:57 vm04 bash[19918]: cluster 2026-03-08T23:18:56.745696+0000 mon.a (mon.0) 418 : cluster [DBG] mgrmap e14: x(active, since 2m)
2026-03-08T23:18:58.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:57 vm04 bash[19918]: cluster 2026-03-08T23:18:56.745739+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T23:18:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:57 vm02 bash[17457]: cluster 2026-03-08T23:18:56.191993+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:18:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:57 vm02 bash[17457]: cluster 2026-03-08T23:18:56.745696+0000 mon.a (mon.0) 418 : cluster [DBG] mgrmap e14: x(active, since 2m)
2026-03-08T23:18:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:57 vm02 bash[17457]: cluster 2026-03-08T23:18:56.745739+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T23:18:58.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:57 vm10 bash[20034]: cluster 2026-03-08T23:18:56.191993+0000 mgr.x (mgr.14150) 134 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:18:58.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:57 vm10 bash[20034]: cluster 2026-03-08T23:18:56.745696+0000 mon.a (mon.0) 418 : cluster [DBG] mgrmap e14: x(active, since 2m)
2026-03-08T23:18:58.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:57 vm10 bash[20034]: cluster 2026-03-08T23:18:56.745739+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T23:18:59.088 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config
2026-03-08T23:19:00.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:18:59 vm04 bash[19918]: cluster 2026-03-08T23:18:58.192263+0000 mgr.x (mgr.14150) 135 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:18:59 vm02 bash[17457]: cluster 2026-03-08T23:18:58.192263+0000 mgr.x (mgr.14150) 135 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:18:59 vm10 bash[20034]: cluster 2026-03-08T23:18:58.192263+0000 mgr.x (mgr.14150) 135 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:00.601 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:19:00.617 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm04:/dev/vdd
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: cephadm 2026-03-08T23:18:59.863516+0000 mgr.x (mgr.14150) 136 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.869128+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.872782+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.873435+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: cephadm 2026-03-08T23:18:59.873791+0000 mgr.x (mgr.14150) 137 : cephadm [INF] Adjusting osd_memory_target on vm04 to 4551M
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.876357+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.877608+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:00.873 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.878066+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:00.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:00 vm04 bash[19918]: audit 2026-03-08T23:18:59.881092+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: cephadm 2026-03-08T23:18:59.863516+0000 mgr.x (mgr.14150) 136 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.869128+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.872782+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.873435+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: cephadm 2026-03-08T23:18:59.873791+0000 mgr.x (mgr.14150) 137 : cephadm [INF] Adjusting osd_memory_target on vm04 to 4551M
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.876357+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.877608+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.878066+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:00 vm02 bash[17457]: audit 2026-03-08T23:18:59.881092+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: cephadm 2026-03-08T23:18:59.863516+0000 mgr.x (mgr.14150) 136 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:19:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.869128+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.872782+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.873435+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: cephadm 2026-03-08T23:18:59.873791+0000 mgr.x (mgr.14150) 137 : cephadm [INF] Adjusting osd_memory_target on vm04 to 4551M
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.876357+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.877608+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.878066+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:01.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:00 vm10 bash[20034]: audit 2026-03-08T23:18:59.881092+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:01 vm02 bash[17457]: cluster 2026-03-08T23:19:00.192493+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449
KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:01 vm02 bash[17457]: cluster 2026-03-08T23:19:00.192493+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:01 vm10 bash[20034]: cluster 2026-03-08T23:19:00.192493+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:01 vm10 bash[20034]: cluster 2026-03-08T23:19:00.192493+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:01 vm04 bash[19918]: cluster 2026-03-08T23:19:00.192493+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:01 vm04 bash[19918]: cluster 2026-03-08T23:19:00.192493+0000 mgr.x (mgr.14150) 138 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:03 vm02 bash[17457]: cluster 2026-03-08T23:19:02.192788+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:03 vm02 bash[17457]: cluster 2026-03-08T23:19:02.192788+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:04.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:03 vm10 bash[20034]: cluster 2026-03-08T23:19:02.192788+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:04.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:03 vm10 bash[20034]: cluster 2026-03-08T23:19:02.192788+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:04.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:03 vm04 bash[19918]: cluster 2026-03-08T23:19:02.192788+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:04.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:03 vm04 bash[19918]: cluster 2026-03-08T23:19:02.192788+0000 mgr.x (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:05.224 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: cluster 2026-03-08T23:19:04.193066+0000 mgr.x (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: cluster 2026-03-08T23:19:04.193066+0000 mgr.x (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: audit 2026-03-08T23:19:05.475813+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: audit 2026-03-08T23:19:05.475813+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: audit 2026-03-08T23:19:05.477134+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: audit 2026-03-08T23:19:05.477134+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: audit 2026-03-08T23:19:05.477636+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:05 vm10 bash[20034]: audit 2026-03-08T23:19:05.477636+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: cluster 2026-03-08T23:19:04.193066+0000 mgr.x (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: cluster 2026-03-08T23:19:04.193066+0000 mgr.x (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: audit 2026-03-08T23:19:05.475813+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: audit 2026-03-08T23:19:05.475813+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: audit 2026-03-08T23:19:05.477134+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: audit 2026-03-08T23:19:05.477134+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 
2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: audit 2026-03-08T23:19:05.477636+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:05 vm04 bash[19918]: audit 2026-03-08T23:19:05.477636+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: cluster 2026-03-08T23:19:04.193066+0000 mgr.x (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: cluster 2026-03-08T23:19:04.193066+0000 mgr.x (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: audit 2026-03-08T23:19:05.475813+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: audit 2026-03-08T23:19:05.475813+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: audit 2026-03-08T23:19:05.477134+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: audit 2026-03-08T23:19:05.477134+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: audit 2026-03-08T23:19:05.477636+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:05 vm02 bash[17457]: audit 2026-03-08T23:19:05.477636+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:07.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:06 vm04 bash[19918]: audit 2026-03-08T23:19:05.474410+0000 mgr.x (mgr.14150) 141 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:19:07.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:06 vm04 bash[19918]: audit 2026-03-08T23:19:05.474410+0000 mgr.x (mgr.14150) 141 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:19:07.394 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:06 vm02 bash[17457]: audit 2026-03-08T23:19:05.474410+0000 mgr.x (mgr.14150) 141 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:19:07.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:06 vm02 bash[17457]: audit 2026-03-08T23:19:05.474410+0000 mgr.x (mgr.14150) 141 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:19:07.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:06 vm10 bash[20034]: audit 2026-03-08T23:19:05.474410+0000 mgr.x (mgr.14150) 141 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:19:07.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:06 vm10 bash[20034]: audit 2026-03-08T23:19:05.474410+0000 mgr.x (mgr.14150) 141 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:19:08.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:07 vm04 bash[19918]: cluster 2026-03-08T23:19:06.193406+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:07 vm04 bash[19918]: cluster 2026-03-08T23:19:06.193406+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:07 vm02 bash[17457]: cluster 2026-03-08T23:19:06.193406+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:07 vm02 bash[17457]: cluster 2026-03-08T23:19:06.193406+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:08.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:07 vm10 bash[20034]: cluster 2026-03-08T23:19:06.193406+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:08.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:07 vm10 bash[20034]: cluster 2026-03-08T23:19:06.193406+0000 mgr.x (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:10.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:09 vm04 bash[19918]: cluster 2026-03-08T23:19:08.193698+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:10.266 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:09 vm04 bash[19918]: cluster 2026-03-08T23:19:08.193698+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:09 vm02 bash[17457]: cluster 2026-03-08T23:19:08.193698+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB 
used, 60 GiB / 60 GiB avail 2026-03-08T23:19:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:09 vm02 bash[17457]: cluster 2026-03-08T23:19:08.193698+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:10.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:09 vm10 bash[20034]: cluster 2026-03-08T23:19:08.193698+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:10.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:09 vm10 bash[20034]: cluster 2026-03-08T23:19:08.193698+0000 mgr.x (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.827584+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.104:0/1688428519' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.827584+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.104:0/1688428519' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.828220+0000 mon.a (mon.0) 430 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.828220+0000 mon.a (mon.0) 430 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.831359+0000 mon.a (mon.0) 431 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]': finished 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.831359+0000 mon.a (mon.0) 431 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]': finished 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: cluster 2026-03-08T23:19:10.833787+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: cluster 2026-03-08T23:19:10.833787+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.833889+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:19:11.265 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:10 vm04 bash[19918]: audit 2026-03-08T23:19:10.833889+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.827584+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.104:0/1688428519' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.827584+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.104:0/1688428519' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.828220+0000 mon.a (mon.0) 430 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.828220+0000 mon.a (mon.0) 430 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.831359+0000 mon.a (mon.0) 431 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]': finished 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.831359+0000 mon.a (mon.0) 431 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]': finished 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: cluster 2026-03-08T23:19:10.833787+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: cluster 2026-03-08T23:19:10.833787+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.833889+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:19:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:10 vm02 bash[17457]: audit 2026-03-08T23:19:10.833889+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.827584+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.104:0/1688428519' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.827584+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.104:0/1688428519' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.828220+0000 mon.a (mon.0) 430 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.828220+0000 mon.a (mon.0) 430 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]: dispatch 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.831359+0000 mon.a (mon.0) 431 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]': finished 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.831359+0000 mon.a (mon.0) 431 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "754d7a6e-d6e9-4d53-b18d-fb8dd322dada"}]': finished 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: cluster 2026-03-08T23:19:10.833787+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: cluster 2026-03-08T23:19:10.833787+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.833889+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:19:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:10 vm10 bash[20034]: audit 2026-03-08T23:19:10.833889+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:19:12.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:11 vm04 bash[19918]: cluster 2026-03-08T23:19:10.193921+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:12.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:11 vm04 bash[19918]: cluster 2026-03-08T23:19:10.193921+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:12.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:11 vm04 bash[19918]: audit 2026-03-08T23:19:11.434483+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.104:0/2438515513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:19:12.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:11 vm04 bash[19918]: audit 2026-03-08T23:19:11.434483+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.104:0/2438515513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:19:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:11 vm02 bash[17457]: cluster 2026-03-08T23:19:10.193921+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:11 vm02 bash[17457]: cluster 2026-03-08T23:19:10.193921+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:11 vm02 bash[17457]: audit 2026-03-08T23:19:11.434483+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.104:0/2438515513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:19:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:11 vm02 bash[17457]: audit 2026-03-08T23:19:11.434483+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.104:0/2438515513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:19:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:11 vm10 bash[20034]: cluster 2026-03-08T23:19:10.193921+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:11 vm10 bash[20034]: cluster 2026-03-08T23:19:10.193921+0000 mgr.x (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:11 vm10 bash[20034]: audit 2026-03-08T23:19:11.434483+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.104:0/2438515513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:19:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:11 vm10 bash[20034]: audit 2026-03-08T23:19:11.434483+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.104:0/2438515513' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:19:14.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:13 vm04 bash[19918]: cluster 2026-03-08T23:19:12.194119+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:14.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:13 vm04 bash[19918]: cluster 2026-03-08T23:19:12.194119+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:14.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:13 vm02 bash[17457]: cluster 2026-03-08T23:19:12.194119+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:14.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:13 vm02 bash[17457]: cluster 2026-03-08T23:19:12.194119+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:14.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:13 vm10 bash[20034]: cluster 2026-03-08T23:19:12.194119+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:14.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:13 vm10 bash[20034]: cluster 2026-03-08T23:19:12.194119+0000 mgr.x (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:16.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:15 vm04 bash[19918]: cluster 2026-03-08T23:19:14.194310+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:15 vm04 bash[19918]: cluster 2026-03-08T23:19:14.194310+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:15 vm02 bash[17457]: cluster 2026-03-08T23:19:14.194310+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:16.394 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:15 vm02 bash[17457]: cluster 2026-03-08T23:19:14.194310+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:15 vm10 bash[20034]: cluster 2026-03-08T23:19:14.194310+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:15 vm10 bash[20034]: cluster 2026-03-08T23:19:14.194310+0000 mgr.x (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:18.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:17 vm04 bash[19918]: cluster 2026-03-08T23:19:16.194589+0000 mgr.x (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:18.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:17 vm04 bash[19918]: cluster 2026-03-08T23:19:16.194589+0000 mgr.x (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:18.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:17 vm02 bash[17457]: cluster 2026-03-08T23:19:16.194589+0000 mgr.x (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:18.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:17 vm02 bash[17457]: cluster 2026-03-08T23:19:16.194589+0000 mgr.x (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:18.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:17 vm10 bash[20034]: cluster 2026-03-08T23:19:16.194589+0000 mgr.x (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:18.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:17 vm10 bash[20034]: cluster 2026-03-08T23:19:16.194589+0000 mgr.x (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:19.542 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:19 vm04 bash[19918]: cluster 2026-03-08T23:19:18.194876+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:19.543 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:19 vm04 bash[19918]: cluster 2026-03-08T23:19:18.194876+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:19 vm02 bash[17457]: cluster 2026-03-08T23:19:18.194876+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:19 vm02 bash[17457]: cluster 2026-03-08T23:19:18.194876+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:19.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:19 vm10 bash[20034]: cluster 2026-03-08T23:19:18.194876+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 
449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:19.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:19 vm10 bash[20034]: cluster 2026-03-08T23:19:18.194876+0000 mgr.x (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: cluster 2026-03-08T23:19:20.195110+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: cluster 2026-03-08T23:19:20.195110+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: audit 2026-03-08T23:19:20.481986+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: audit 2026-03-08T23:19:20.481986+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: audit 2026-03-08T23:19:20.482536+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: audit 2026-03-08T23:19:20.482536+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: cephadm 2026-03-08T23:19:20.482978+0000 mgr.x (mgr.14150) 150 : cephadm [INF] Deploying daemon osd.3 on vm04 2026-03-08T23:19:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 bash[19918]: cephadm 2026-03-08T23:19:20.482978+0000 mgr.x (mgr.14150) 150 : cephadm [INF] Deploying daemon osd.3 on vm04 2026-03-08T23:19:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:19:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:19:21.876 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:19:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:19:21.876 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:19:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: cluster 2026-03-08T23:19:20.195110+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: cluster 2026-03-08T23:19:20.195110+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: audit 2026-03-08T23:19:20.481986+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: audit 2026-03-08T23:19:20.481986+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: audit 2026-03-08T23:19:20.482536+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: audit 2026-03-08T23:19:20.482536+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: cephadm 2026-03-08T23:19:20.482978+0000 mgr.x (mgr.14150) 150 : cephadm [INF] Deploying daemon osd.3 on vm04 2026-03-08T23:19:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:21 vm02 bash[17457]: cephadm 2026-03-08T23:19:20.482978+0000 mgr.x (mgr.14150) 150 : cephadm [INF] Deploying daemon osd.3 on vm04 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: cluster 2026-03-08T23:19:20.195110+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: cluster 2026-03-08T23:19:20.195110+0000 mgr.x (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: audit 2026-03-08T23:19:20.481986+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", 
"entity": "osd.3"}]: dispatch 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: audit 2026-03-08T23:19:20.481986+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: audit 2026-03-08T23:19:20.482536+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: audit 2026-03-08T23:19:20.482536+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: cephadm 2026-03-08T23:19:20.482978+0000 mgr.x (mgr.14150) 150 : cephadm [INF] Deploying daemon osd.3 on vm04 2026-03-08T23:19:21.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:21 vm10 bash[20034]: cephadm 2026-03-08T23:19:20.482978+0000 mgr.x (mgr.14150) 150 : cephadm [INF] Deploying daemon osd.3 on vm04 2026-03-08T23:19:22.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:22 vm04 bash[19918]: audit 2026-03-08T23:19:21.923523+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:19:22.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:22 vm04 bash[19918]: audit 2026-03-08T23:19:21.923523+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:19:22.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:22 vm04 bash[19918]: audit 2026-03-08T23:19:21.928894+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:22 vm04 bash[19918]: audit 2026-03-08T23:19:21.928894+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:22 vm04 bash[19918]: audit 2026-03-08T23:19:21.933686+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:22 vm04 bash[19918]: audit 2026-03-08T23:19:21.933686+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:22 vm02 bash[17457]: audit 2026-03-08T23:19:21.923523+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:19:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:22 vm02 bash[17457]: audit 2026-03-08T23:19:21.923523+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:19:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:22 vm02 bash[17457]: audit 2026-03-08T23:19:21.928894+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 
192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:22 vm02 bash[17457]: audit 2026-03-08T23:19:21.928894+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:22 vm02 bash[17457]: audit 2026-03-08T23:19:21.933686+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:22 vm02 bash[17457]: audit 2026-03-08T23:19:21.933686+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:22 vm10 bash[20034]: audit 2026-03-08T23:19:21.923523+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:19:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:22 vm10 bash[20034]: audit 2026-03-08T23:19:21.923523+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:19:22.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:22 vm10 bash[20034]: audit 2026-03-08T23:19:21.928894+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:22 vm10 bash[20034]: audit 2026-03-08T23:19:21.928894+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:22 vm10 bash[20034]: audit 2026-03-08T23:19:21.933686+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:22.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:22 vm10 bash[20034]: audit 2026-03-08T23:19:21.933686+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:19:24.330 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:24 vm04 bash[19918]: cluster 2026-03-08T23:19:22.195637+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:24.330 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:24 vm04 bash[19918]: cluster 2026-03-08T23:19:22.195637+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:24.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:24 vm02 bash[17457]: cluster 2026-03-08T23:19:22.195637+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:24.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:24 vm02 bash[17457]: cluster 2026-03-08T23:19:22.195637+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:24.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:24 vm10 bash[20034]: cluster 2026-03-08T23:19:22.195637+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:19:24.407 
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:24 vm10 bash[20034]: cluster 2026-03-08T23:19:22.195637+0000 mgr.x (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:25.554 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:25 vm04 bash[19918]: cluster 2026-03-08T23:19:24.195909+0000 mgr.x (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:25 vm02 bash[17457]: cluster 2026-03-08T23:19:24.195909+0000 mgr.x (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:25.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:25 vm10 bash[20034]: cluster 2026-03-08T23:19:24.195909+0000 mgr.x (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:26.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:26 vm04 bash[19918]: audit 2026-03-08T23:19:25.557974+0000 mon.b (mon.2) 12 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:19:26.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:26 vm04 bash[19918]: audit 2026-03-08T23:19:25.558569+0000 mon.a (mon.0) 439 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:19:26.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:26 vm02 bash[17457]: audit 2026-03-08T23:19:25.557974+0000 mon.b (mon.2) 12 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:19:26.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:26 vm02 bash[17457]: audit 2026-03-08T23:19:25.558569+0000 mon.a (mon.0) 439 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:19:26.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:26 vm10 bash[20034]: audit 2026-03-08T23:19:25.557974+0000 mon.b (mon.2) 12 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:19:26.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:26 vm10 bash[20034]: audit 2026-03-08T23:19:25.558569+0000 mon.a (mon.0) 439 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:19:27.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:27 vm04 bash[19918]: cluster 2026-03-08T23:19:26.196149+0000 mgr.x (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:27.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:27 vm04 bash[19918]: audit 2026-03-08T23:19:26.288129+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-08T23:19:27.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:27 vm04 bash[19918]: cluster 2026-03-08T23:19:26.290555+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-08T23:19:27.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:27 vm04 bash[19918]: audit 2026-03-08T23:19:26.290800+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:27.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:27 vm04 bash[19918]: audit 2026-03-08T23:19:26.290952+0000 mon.b (mon.2) 13 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:19:27.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:27 vm04 bash[19918]: audit 2026-03-08T23:19:26.291466+0000 mon.a (mon.0) 443 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:19:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:27 vm02 bash[17457]: cluster 2026-03-08T23:19:26.196149+0000 mgr.x (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:27 vm02 bash[17457]: audit 2026-03-08T23:19:26.288129+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-08T23:19:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:27 vm02 bash[17457]: cluster 2026-03-08T23:19:26.290555+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-08T23:19:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:27 vm02 bash[17457]: audit 2026-03-08T23:19:26.290800+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:27 vm02 bash[17457]: audit 2026-03-08T23:19:26.290952+0000 mon.b (mon.2) 13 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:19:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:27 vm02 bash[17457]: audit 2026-03-08T23:19:26.291466+0000 mon.a (mon.0) 443 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:19:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:27 vm10 bash[20034]: cluster 2026-03-08T23:19:26.196149+0000 mgr.x (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:27 vm10 bash[20034]: audit 2026-03-08T23:19:26.288129+0000 mon.a (mon.0) 440 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-08T23:19:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:27 vm10 bash[20034]: cluster 2026-03-08T23:19:26.290555+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-08T23:19:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:27 vm10 bash[20034]: audit 2026-03-08T23:19:26.290800+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:27 vm10 bash[20034]: audit 2026-03-08T23:19:26.290952+0000 mon.b (mon.2) 13 : audit [INF] from='osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:19:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:27 vm10 bash[20034]: audit 2026-03-08T23:19:26.291466+0000 mon.a (mon.0) 443 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.290934+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: cluster 2026-03-08T23:19:27.296122+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.297037+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.297701+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.990687+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.995428+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.996040+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.996426+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: audit 2026-03-08T23:19:27.999544+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: cluster 2026-03-08T23:19:28.302169+0000 mon.a (mon.0) 453 : cluster [INF] osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314] boot
2026-03-08T23:19:28.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:28 vm04 bash[19918]: cluster 2026-03-08T23:19:28.302243+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.290934+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: cluster 2026-03-08T23:19:27.296122+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.297037+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.297701+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.990687+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.995428+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.996040+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.996426+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: audit 2026-03-08T23:19:27.999544+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: cluster 2026-03-08T23:19:28.302169+0000 mon.a (mon.0) 453 : cluster [INF] osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314] boot
2026-03-08T23:19:28.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:28 vm02 bash[17457]: cluster 2026-03-08T23:19:28.302243+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.290934+0000 mon.a (mon.0) 444 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: cluster 2026-03-08T23:19:27.296122+0000 mon.a (mon.0) 445 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.297037+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.297701+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.990687+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.995428+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.996040+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:28.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.996426+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:28.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: audit 2026-03-08T23:19:27.999544+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:28.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: cluster 2026-03-08T23:19:28.302169+0000 mon.a (mon.0) 453 : cluster [INF] osd.3 [v2:192.168.123.104:6808/953613314,v1:192.168.123.104:6809/953613314] boot
2026-03-08T23:19:28.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:28 vm10 bash[20034]: cluster 2026-03-08T23:19:28.302243+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-08T23:19:28.961 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 3 on host 'vm04'
2026-03-08T23:19:29.046 DEBUG:teuthology.orchestra.run.vm04:osd.3> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.3.service
2026-03-08T23:19:29.047 INFO:tasks.cephadm:Deploying osd.4 on vm04 with /dev/vdc...
2026-03-08T23:19:29.048 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vdc
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: cluster 2026-03-08T23:19:26.588730+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: cluster 2026-03-08T23:19:26.588767+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: cluster 2026-03-08T23:19:28.196351+0000 mgr.x (mgr.14150) 154 : cluster [DBG] pgmap v111: 1 pgs: 1 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: audit 2026-03-08T23:19:28.302337+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: audit 2026-03-08T23:19:28.948438+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: audit 2026-03-08T23:19:28.953022+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:29 vm04 bash[19918]: audit 2026-03-08T23:19:28.956147+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: cluster 2026-03-08T23:19:26.588730+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: cluster 2026-03-08T23:19:26.588767+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: cluster 2026-03-08T23:19:28.196351+0000 mgr.x (mgr.14150) 154 : cluster [DBG] pgmap v111: 1 pgs: 1 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: audit 2026-03-08T23:19:28.302337+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: audit 2026-03-08T23:19:28.948438+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: audit 2026-03-08T23:19:28.953022+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:29 vm02 bash[17457]: audit 2026-03-08T23:19:28.956147+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: cluster 2026-03-08T23:19:26.588730+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: cluster 2026-03-08T23:19:26.588767+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: cluster 2026-03-08T23:19:28.196351+0000 mgr.x (mgr.14150) 154 : cluster [DBG] pgmap v111: 1 pgs: 1 unknown; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: audit 2026-03-08T23:19:28.302337+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: audit 2026-03-08T23:19:28.948438+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: audit 2026-03-08T23:19:28.953022+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:29 vm10 bash[20034]: audit 2026-03-08T23:19:28.956147+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:30 vm04 bash[19918]: cluster 2026-03-08T23:19:29.371726+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-08T23:19:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:30 vm02 bash[17457]: cluster 2026-03-08T23:19:29.371726+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-08T23:19:30.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:30 vm10 bash[20034]: cluster 2026-03-08T23:19:29.371726+0000 mon.a (mon.0) 459 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-08T23:19:31.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:31 vm04 bash[19918]: cluster 2026-03-08T23:19:30.196606+0000 mgr.x (mgr.14150) 155 : cluster [DBG] pgmap v114: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:31 vm04 bash[19918]: cluster 2026-03-08T23:19:30.451098+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-08T23:19:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:31 vm02 bash[17457]: cluster 2026-03-08T23:19:30.196606+0000 mgr.x (mgr.14150) 155 : cluster [DBG] pgmap v114: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:31 vm02 bash[17457]: cluster 2026-03-08T23:19:30.451098+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-08T23:19:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:31 vm10 bash[20034]: cluster 2026-03-08T23:19:30.196606+0000 mgr.x (mgr.14150) 155 : cluster [DBG] pgmap v114: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:31 vm10 bash[20034]: cluster 2026-03-08T23:19:30.451098+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in
2026-03-08T23:19:33.701 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config
2026-03-08T23:19:33.752 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:33 vm04 bash[19918]: cluster 2026-03-08T23:19:32.196821+0000 mgr.x (mgr.14150) 156 : cluster [DBG] pgmap v116: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:33 vm02 bash[17457]: cluster 2026-03-08T23:19:32.196821+0000 mgr.x (mgr.14150) 156 : cluster [DBG] pgmap v116: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:33 vm10 bash[20034]: cluster 2026-03-08T23:19:32.196821+0000 mgr.x (mgr.14150) 156 : cluster [DBG] pgmap v116: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:34.561 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:19:34.576 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm04:/dev/vdc
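The two DEBUG commands above are the complete per-device flow the cephadm task drives: a containerized ceph-volume zap to wipe the device, then an orchestrator call that creates and starts the OSD. A minimal standalone sketch of the same two steps, with <IMAGE>, <FSID>, <HOST> and <DEV> as placeholders for this run's values:

    # Step 1: clear any old LVM metadata / partition state so ceph-volume sees a clean device
    sudo cephadm --image <IMAGE> ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid <FSID> -- lvm zap /dev/<DEV>
    # Step 2: hand the clean device to the orchestrator, which provisions and boots the OSD
    sudo cephadm shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid <FSID> -- ceph orch daemon add osd <HOST>:/dev/<DEV>

The earlier "Created osd(s) 3 on host 'vm04'" stdout line and the osdmap epoch bumps (e23 through e27) are the same flow completing for the previous device on this host.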
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: cluster 2026-03-08T23:19:34.197130+0000 mgr.x (mgr.14150) 157 : cluster [DBG] pgmap v117: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.291431+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.295337+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.296284+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.296801+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.299895+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.301734+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.302175+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:35 vm04 bash[19918]: audit 2026-03-08T23:19:35.305679+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: cluster 2026-03-08T23:19:34.197130+0000 mgr.x (mgr.14150) 157 : cluster [DBG] pgmap v117: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.291431+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.295337+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.296284+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.296801+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.299895+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.301734+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.302175+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:35 vm02 bash[17457]: audit 2026-03-08T23:19:35.305679+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: cluster 2026-03-08T23:19:34.197130+0000 mgr.x (mgr.14150) 157 : cluster [DBG] pgmap v117: 1 pgs: 1 unknown; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.291431+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.295337+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.296284+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.296801+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.299895+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.301734+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.302175+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:19:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:35 vm10 bash[20034]: audit 2026-03-08T23:19:35.305679+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:36.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:36 vm04 bash[19918]: cephadm 2026-03-08T23:19:35.286240+0000 mgr.x (mgr.14150) 158 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:19:36.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:36 vm04 bash[19918]: cephadm 2026-03-08T23:19:35.297104+0000 mgr.x (mgr.14150) 159 : cephadm [INF] Adjusting osd_memory_target on vm04 to 2275M
2026-03-08T23:19:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:36 vm02 bash[17457]: cephadm 2026-03-08T23:19:35.286240+0000 mgr.x (mgr.14150) 158 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:19:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:36 vm02 bash[17457]: cephadm 2026-03-08T23:19:35.297104+0000 mgr.x (mgr.14150) 159 : cephadm [INF] Adjusting osd_memory_target on vm04 to 2275M
2026-03-08T23:19:36.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:36 vm10 bash[20034]: cephadm 2026-03-08T23:19:35.286240+0000 mgr.x (mgr.14150) 158 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:19:36.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:36 vm10 bash[20034]: cephadm 2026-03-08T23:19:35.297104+0000 mgr.x (mgr.14150) 159 : cephadm [INF] Adjusting osd_memory_target on vm04 to 2275M
2026-03-08T23:19:37.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:37 vm04 bash[19918]: cluster 2026-03-08T23:19:36.197410+0000 mgr.x (mgr.14150) 160 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 57 KiB/s, 0 objects/s recovering
2026-03-08T23:19:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:37 vm02 bash[17457]: cluster 2026-03-08T23:19:36.197410+0000 mgr.x (mgr.14150) 160 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 57 KiB/s, 0 objects/s recovering
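The "config rm ... osd_memory_target" audit entries together with the cephadm "Adjusting osd_memory_target on vm04 to 2275M" messages above are cephadm's OSD memory autotuning reacting to the newly added device: stale per-daemon overrides are dropped and a fresh per-host target is computed from the host's RAM and OSD count. A hedged sketch of how one could inspect or opt out of this behaviour (option names taken from the upstream cephadm documentation, not from this log):

    # Show the memory targets currently stored in the cluster config
    sudo cephadm shell -- ceph config dump | grep osd_memory_target
    # Disable autotuning and pin a fixed target instead (values here are illustrative)
    sudo cephadm shell -- ceph config set osd osd_memory_target_autotune false
    sudo cephadm shell -- ceph config set osd.4 osd_memory_target 2275M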
cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-08T23:19:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:37 vm10 bash[20034]: cluster 2026-03-08T23:19:36.197410+0000 mgr.x (mgr.14150) 160 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-08T23:19:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:37 vm10 bash[20034]: cluster 2026-03-08T23:19:36.197410+0000 mgr.x (mgr.14150) 160 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 57 KiB/s, 0 objects/s recovering 2026-03-08T23:19:39.206 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config 2026-03-08T23:19:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: cluster 2026-03-08T23:19:38.197661+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: cluster 2026-03-08T23:19:38.197661+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 51 KiB/s, 0 objects/s recovering 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: audit 2026-03-08T23:19:39.468521+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: audit 2026-03-08T23:19:39.468521+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: audit 2026-03-08T23:19:39.469864+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: audit 2026-03-08T23:19:39.469864+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: audit 2026-03-08T23:19:39.470335+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:39.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:39 vm04 bash[19918]: audit 2026-03-08T23:19:39.470335+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:19:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:39 vm02 bash[17457]: cluster 2026-03-08T23:19:38.197661+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 51 KiB/s, 0 objects/s recovering 
2026-03-08T23:19:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:39 vm02 bash[17457]: audit 2026-03-08T23:19:39.468521+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:19:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:39 vm02 bash[17457]: audit 2026-03-08T23:19:39.469864+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:19:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:39 vm02 bash[17457]: audit 2026-03-08T23:19:39.470335+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:39 vm10 bash[20034]: cluster 2026-03-08T23:19:38.197661+0000 mgr.x (mgr.14150) 161 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 51 KiB/s, 0 objects/s recovering
2026-03-08T23:19:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:39 vm10 bash[20034]: audit 2026-03-08T23:19:39.468521+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:19:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:39 vm10 bash[20034]: audit 2026-03-08T23:19:39.469864+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:19:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:39 vm10 bash[20034]: audit 2026-03-08T23:19:39.470335+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:40.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:40 vm04 bash[19918]: audit 2026-03-08T23:19:39.467188+0000 mgr.x (mgr.14150) 162 : audit [DBG] from='client.24190 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:19:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:40 vm02 bash[17457]: audit 2026-03-08T23:19:39.467188+0000 mgr.x (mgr.14150) 162 : audit [DBG] from='client.24190 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:19:40.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:40 vm10 bash[20034]: audit 2026-03-08T23:19:39.467188+0000 mgr.x (mgr.14150) 162 : audit [DBG] from='client.24190 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:19:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:41 vm04 bash[19918]: cluster 2026-03-08T23:19:40.197936+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-08T23:19:41.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:41 vm04 bash[19918]: cluster 2026-03-08T23:19:40.197936+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-08T23:19:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:41 vm02 bash[17457]: cluster 2026-03-08T23:19:40.197936+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-08T23:19:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:41 vm10 bash[20034]: cluster 2026-03-08T23:19:40.197936+0000 mgr.x (mgr.14150) 163 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-08T23:19:43.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:43 vm04 bash[19918]: cluster 2026-03-08T23:19:42.198144+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 38 KiB/s, 0 objects/s recovering
2026-03-08T23:19:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:43 vm02 bash[17457]: cluster 2026-03-08T23:19:42.198144+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 38 KiB/s, 0 objects/s recovering
2026-03-08T23:19:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:43 vm10 bash[20034]: cluster 2026-03-08T23:19:42.198144+0000 mgr.x (mgr.14150) 164 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 38 KiB/s, 0 objects/s recovering
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: cluster 2026-03-08T23:19:44.198434+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: audit 2026-03-08T23:19:44.819042+0000 mon.b (mon.2) 14 : audit [INF] from='client.? 192.168.123.104:0/2196410747' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]: dispatch
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: audit 2026-03-08T23:19:44.819740+0000 mon.a (mon.0) 472 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]: dispatch
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: audit 2026-03-08T23:19:44.826909+0000 mon.a (mon.0) 473 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]': finished
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: cluster 2026-03-08T23:19:44.832907+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: audit 2026-03-08T23:19:44.833008+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:19:45.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:45 vm04 bash[19918]: audit 2026-03-08T23:19:45.463983+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 192.168.123.104:0/2285914016' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: cluster 2026-03-08T23:19:44.198434+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: audit 2026-03-08T23:19:44.819042+0000 mon.b (mon.2) 14 : audit [INF] from='client.? 192.168.123.104:0/2196410747' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]: dispatch
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: audit 2026-03-08T23:19:44.819740+0000 mon.a (mon.0) 472 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]: dispatch
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: audit 2026-03-08T23:19:44.826909+0000 mon.a (mon.0) 473 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]': finished
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: cluster 2026-03-08T23:19:44.832907+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: audit 2026-03-08T23:19:44.833008+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:19:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:45 vm02 bash[17457]: audit 2026-03-08T23:19:45.463983+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 192.168.123.104:0/2285914016' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: cluster 2026-03-08T23:19:44.198434+0000 mgr.x (mgr.14150) 165 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: audit 2026-03-08T23:19:44.819042+0000 mon.b (mon.2) 14 : audit [INF] from='client.? 192.168.123.104:0/2196410747' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]: dispatch
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: audit 2026-03-08T23:19:44.819740+0000 mon.a (mon.0) 472 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]: dispatch
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: audit 2026-03-08T23:19:44.826909+0000 mon.a (mon.0) 473 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bfc224db-b68a-4579-b006-40bea8da3848"}]': finished
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: cluster 2026-03-08T23:19:44.832907+0000 mon.a (mon.0) 474 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: audit 2026-03-08T23:19:44.833008+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:19:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:45 vm10 bash[20034]: audit 2026-03-08T23:19:45.463983+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 192.168.123.104:0/2285914016' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:19:47.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:47 vm04 bash[19918]: cluster 2026-03-08T23:19:46.198720+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:47 vm04 bash[19918]: cluster 2026-03-08T23:19:46.198720+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:47 vm02 bash[17457]: cluster 2026-03-08T23:19:46.198720+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:47 vm10 bash[20034]: cluster 2026-03-08T23:19:46.198720+0000 mgr.x (mgr.14150) 166 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:49.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:49 vm04 bash[19918]: cluster 2026-03-08T23:19:48.199018+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:49.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:49 vm04 bash[19918]: cluster 2026-03-08T23:19:48.199018+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:49 vm02 bash[17457]: cluster 2026-03-08T23:19:48.199018+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:49 vm10 bash[20034]: cluster 2026-03-08T23:19:48.199018+0000 mgr.x (mgr.14150) 167 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:51.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:51 vm04 bash[19918]: cluster 2026-03-08T23:19:50.199281+0000 mgr.x (mgr.14150) 168 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:51.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:51 vm04 bash[19918]: cluster 2026-03-08T23:19:50.199281+0000 mgr.x (mgr.14150) 168 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:51.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:51 vm02 bash[17457]: cluster 2026-03-08T23:19:50.199281+0000 mgr.x (mgr.14150) 168 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:51.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:51 vm02 bash[17457]: cluster 2026-03-08T23:19:50.199281+0000 mgr.x (mgr.14150) 168 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:51.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:51 vm10 bash[20034]: cluster 2026-03-08T23:19:50.199281+0000 mgr.x (mgr.14150) 168 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:53.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:53 vm04 bash[19918]: cluster 2026-03-08T23:19:52.199581+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:53.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:53 vm02 bash[17457]: cluster 2026-03-08T23:19:52.199581+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:53 vm10 bash[20034]: cluster 2026-03-08T23:19:52.199581+0000 mgr.x (mgr.14150) 169 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:54.427 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:19:54.427 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:19:54.427 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:19:54.734 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:54 vm04 bash[19918]: audit 2026-03-08T23:19:53.625763+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-08T23:19:54.735 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:54 vm04 bash[19918]: audit 2026-03-08T23:19:53.625763+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-08T23:19:54.735 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:54 vm04 bash[19918]: audit 2026-03-08T23:19:53.626212+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:54.735 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:54 vm04 bash[19918]: cephadm 2026-03-08T23:19:53.626565+0000 mgr.x (mgr.14150) 170 : cephadm [INF] Deploying daemon osd.4 on vm04
2026-03-08T23:19:54.735 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:19:54.735 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:19:54.735 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:19:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:54 vm02 bash[17457]: audit 2026-03-08T23:19:53.625763+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-08T23:19:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:54 vm02 bash[17457]: audit 2026-03-08T23:19:53.626212+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:54.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:54 vm02 bash[17457]: cephadm 2026-03-08T23:19:53.626565+0000 mgr.x (mgr.14150) 170 : cephadm [INF] Deploying daemon osd.4 on vm04
2026-03-08T23:19:54.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:54 vm10 bash[20034]: audit 2026-03-08T23:19:53.625763+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-08T23:19:54.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:54 vm10 bash[20034]: audit 2026-03-08T23:19:53.626212+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:19:54.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:54 vm10 bash[20034]: cephadm 2026-03-08T23:19:53.626565+0000 mgr.x (mgr.14150) 170 : cephadm [INF] Deploying daemon osd.4 on vm04
2026-03-08T23:19:55.845 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:55 vm04 bash[19918]: cluster 2026-03-08T23:19:54.199860+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:55.845 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:55 vm04 bash[19918]: audit 2026-03-08T23:19:54.672502+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:55.846 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:55 vm04 bash[19918]: audit 2026-03-08T23:19:54.672502+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:55.846 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:55 vm04 bash[19918]: audit 2026-03-08T23:19:54.680487+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:55.846 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:55 vm04 bash[19918]: audit 2026-03-08T23:19:54.685065+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:55 vm02 bash[17457]: cluster 2026-03-08T23:19:54.199860+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:55 vm02 bash[17457]: audit 2026-03-08T23:19:54.672502+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:55 vm02 bash[17457]: audit 2026-03-08T23:19:54.680487+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:55.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:55 vm02 bash[17457]: audit 2026-03-08T23:19:54.685065+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:55.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:55 vm10 bash[20034]: cluster 2026-03-08T23:19:54.199860+0000 mgr.x (mgr.14150) 171 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:55.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:55 vm10 bash[20034]: audit 2026-03-08T23:19:54.672502+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:19:55.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:55 vm10 bash[20034]: audit 2026-03-08T23:19:54.680487+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:55.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:55 vm10 bash[20034]: audit 2026-03-08T23:19:54.685065+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:19:58.016 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:57 vm04 bash[19918]: cluster 2026-03-08T23:19:56.200118+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:57 vm02 bash[17457]: cluster 2026-03-08T23:19:56.200118+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:58.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:57 vm10 bash[20034]: cluster 2026-03-08T23:19:56.200118+0000 mgr.x (mgr.14150) 172 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:19:59.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:58 vm04 bash[19918]: audit 2026-03-08T23:19:58.020565+0000 mon.b (mon.2) 15 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:58 vm04 bash[19918]: audit 2026-03-08T23:19:58.020565+0000 mon.b (mon.2) 15 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:58 vm04 bash[19918]: audit 2026-03-08T23:19:58.021215+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:58 vm02 bash[17457]: audit 2026-03-08T23:19:58.020565+0000 mon.b (mon.2) 15 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:58 vm02 bash[17457]: audit 2026-03-08T23:19:58.021215+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:58 vm10 bash[20034]: audit 2026-03-08T23:19:58.020565+0000 mon.b (mon.2) 15 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.160 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:58 vm10 bash[20034]: audit 2026-03-08T23:19:58.020565+0000 mon.b (mon.2) 15 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:19:59.160 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:58 vm10 bash[20034]: audit 2026-03-08T23:19:58.021215+0000 mon.a (mon.0) 481 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-08T23:20:00.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:59 vm04 bash[19918]: cluster 2026-03-08T23:19:58.200347+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:20:00.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:59 vm04 bash[19918]: audit 2026-03-08T23:19:58.689820+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-08T23:20:00.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:59 vm04 bash[19918]: cluster 2026-03-08T23:19:58.691426+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-08T23:20:00.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:59 vm04 bash[19918]: audit 2026-03-08T23:19:58.691594+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:00.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:59 vm04 bash[19918]: audit 2026-03-08T23:19:58.699837+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:20:00.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:19:59 vm04 bash[19918]: audit 2026-03-08T23:19:58.700436+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:20:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:59 vm02 bash[17457]: cluster 2026-03-08T23:19:58.200347+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:20:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:59 vm02 bash[17457]: audit 2026-03-08T23:19:58.689820+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-08T23:20:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:59 vm02 bash[17457]: cluster 2026-03-08T23:19:58.691426+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-08T23:20:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:59 vm02 bash[17457]: audit 2026-03-08T23:19:58.691594+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:59 vm02 bash[17457]: audit 2026-03-08T23:19:58.699837+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:20:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:19:59 vm02 bash[17457]: audit 2026-03-08T23:19:58.700436+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:20:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:59 vm10 bash[20034]: cluster 2026-03-08T23:19:58.200347+0000 mgr.x (mgr.14150) 173 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:20:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:59 vm10 bash[20034]: audit 2026-03-08T23:19:58.689820+0000 mon.a (mon.0) 482 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-08T23:20:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:59 vm10 bash[20034]: cluster 2026-03-08T23:19:58.691426+0000 mon.a (mon.0) 483 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-08T23:20:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:59 vm10 bash[20034]: audit 2026-03-08T23:19:58.691594+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:59 vm10 bash[20034]: audit 2026-03-08T23:19:58.699837+0000 mon.b (mon.2) 16 : audit [INF] from='osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:20:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:19:59 vm10 bash[20034]: audit 2026-03-08T23:19:58.700436+0000 mon.a (mon.0) 485 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-08T23:20:00.876 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:00 vm04 bash[19918]: audit 2026-03-08T23:19:59.696095+0000 mon.a (mon.0) 486 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-08T23:20:00.876 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:00 vm04 bash[19918]: cluster 2026-03-08T23:19:59.698568+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-08T23:20:00.876 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:00 vm04 bash[19918]: audit 2026-03-08T23:19:59.699282+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:00.876 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:00 vm04 bash[19918]: audit 2026-03-08T23:19:59.718958+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:00.876 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:00 vm04 bash[19918]: cluster 2026-03-08T23:20:00.000087+0000 mon.a (mon.0) 490 : cluster [INF] overall HEALTH_OK
2026-03-08T23:20:00.876 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:00 vm04 bash[19918]: audit 2026-03-08T23:20:00.695892+0000 mon.a (mon.0) 491 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:19:59.696095+0000 mon.a (mon.0) 486 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:19:59.696095+0000 mon.a (mon.0) 486 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush
create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: cluster 2026-03-08T23:19:59.698568+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: cluster 2026-03-08T23:19:59.698568+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:19:59.699282+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:19:59.699282+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:19:59.718958+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:19:59.718958+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: cluster 2026-03-08T23:20:00.000087+0000 mon.a (mon.0) 490 : cluster [INF] overall HEALTH_OK 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: cluster 2026-03-08T23:20:00.000087+0000 mon.a (mon.0) 490 : cluster [INF] overall HEALTH_OK 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:20:00.695892+0000 mon.a (mon.0) 491 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-08T23:20:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:00 vm02 bash[17457]: audit 2026-03-08T23:20:00.695892+0000 mon.a (mon.0) 491 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:19:59.696095+0000 mon.a (mon.0) 486 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:19:59.696095+0000 mon.a (mon.0) 486 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: cluster 2026-03-08T23:19:59.698568+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: cluster 2026-03-08T23:19:59.698568+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 
2026-03-08T23:19:59.699282+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:19:59.699282+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:19:59.718958+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:19:59.718958+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: cluster 2026-03-08T23:20:00.000087+0000 mon.a (mon.0) 490 : cluster [INF] overall HEALTH_OK 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: cluster 2026-03-08T23:20:00.000087+0000 mon.a (mon.0) 490 : cluster [INF] overall HEALTH_OK 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:20:00.695892+0000 mon.a (mon.0) 491 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-08T23:20:01.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:00 vm10 bash[20034]: audit 2026-03-08T23:20:00.695892+0000 mon.a (mon.0) 491 : audit [INF] from='osd.4 ' entity='osd.4' 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: cluster 2026-03-08T23:19:59.033671+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: cluster 2026-03-08T23:19:59.033671+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: cluster 2026-03-08T23:19:59.033734+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: cluster 2026-03-08T23:19:59.033734+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: cluster 2026-03-08T23:20:00.200544+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: cluster 2026-03-08T23:20:00.200544+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:00.720306+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:00.720306+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 
192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:00.860147+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:00.860147+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:00.864472+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:00.864472+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:01.254398+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:01.254398+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:01.254958+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:01.254958+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:01.260914+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:01 vm04 bash[19918]: audit 2026-03-08T23:20:01.260914+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:01.963 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 4 on host 'vm04' 2026-03-08T23:20:02.040 DEBUG:teuthology.orchestra.run.vm04:osd.4> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.4.service 2026-03-08T23:20:02.041 INFO:tasks.cephadm:Deploying osd.5 on vm10 with /dev/vde... 
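
The two teuthology.orchestra.run.vm10 commands that follow show the per-device pattern the cephadm task repeats for every OSD in the job: wipe the device with ceph-volume lvm zap, then hand it to the orchestrator with ceph orch daemon add osd. A minimal Python sketch of that loop, assuming a hypothetical run() helper in place of teuthology's SSH plumbing (deploy_osd is not a real teuthology function):

    import subprocess

    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "91105a84-1b44-11f1-9a43-e95894f13987"
    CONF = ["-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring"]

    def run(host, args):
        # Stand-in for teuthology's remote runner; the real task executes
        # these over SSH on the target host.
        subprocess.run(["ssh", host, "sudo", CEPHADM, "--image", IMAGE] + args,
                       check=True)

    def deploy_osd(host, dev):
        # 1. Clear any leftover LVM/partition state so ceph-volume sees a
        #    clean device.
        run(host, ["ceph-volume"] + CONF + ["--fsid", FSID, "--", "lvm", "zap", dev])
        # 2. Ask the orchestrator to create an OSD on it; the mon/mgr audit
        #    entries relayed below are the fallout of this call.
        run(host, ["shell"] + CONF + ["--fsid", FSID, "--",
                   "ceph", "orch", "daemon", "add", "osd", f"{host}:{dev}"])

    deploy_osd("vm10", "/dev/vde")
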
2026-03-08T23:20:02.041 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vde
2026-03-08T23:20:02.047 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: cluster 2026-03-08T23:19:59.033671+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:20:02.047 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: cluster 2026-03-08T23:19:59.033734+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:20:02.047 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: cluster 2026-03-08T23:20:00.200544+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:20:02.048 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: audit 2026-03-08T23:20:00.720306+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:02.048 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: audit 2026-03-08T23:20:00.860147+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:02.048 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: audit 2026-03-08T23:20:00.864472+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:02.048 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: audit 2026-03-08T23:20:01.254398+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:02.048 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: audit 2026-03-08T23:20:01.254958+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:02.048 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:01 vm10 bash[20034]: audit 2026-03-08T23:20:01.260914+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: cluster 2026-03-08T23:19:59.033671+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:20:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: cluster 2026-03-08T23:19:59.033734+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:20:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: cluster 2026-03-08T23:20:00.200544+0000 mgr.x (mgr.14150) 174 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:20:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: audit 2026-03-08T23:20:00.720306+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: audit 2026-03-08T23:20:00.860147+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:02.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: audit 2026-03-08T23:20:00.864472+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:02.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: audit 2026-03-08T23:20:01.254398+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:02.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: audit 2026-03-08T23:20:01.254958+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:02.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:01 vm02 bash[17457]: audit 2026-03-08T23:20:01.260914+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:02 vm04 bash[19918]: cluster 2026-03-08T23:20:01.719848+0000 mon.a (mon.0) 498 : cluster [INF] osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940] boot
2026-03-08T23:20:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:02 vm04 bash[19918]: cluster 2026-03-08T23:20:01.720017+0000 mon.a (mon.0) 499 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-08T23:20:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:02 vm04 bash[19918]: audit 2026-03-08T23:20:01.720985+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:02 vm04 bash[19918]: audit 2026-03-08T23:20:01.951688+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:02 vm04 bash[19918]: audit 2026-03-08T23:20:01.955560+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:02 vm04 bash[19918]: audit 2026-03-08T23:20:01.959671+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:02 vm02 bash[17457]: cluster 2026-03-08T23:20:01.719848+0000 mon.a (mon.0) 498 : cluster [INF] osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940] boot
2026-03-08T23:20:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:02 vm02 bash[17457]: cluster 2026-03-08T23:20:01.720017+0000 mon.a (mon.0) 499 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-08T23:20:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:02 vm02 bash[17457]: audit 2026-03-08T23:20:01.720985+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:02 vm02 bash[17457]: audit 2026-03-08T23:20:01.951688+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:02 vm02 bash[17457]: audit 2026-03-08T23:20:01.955560+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:02 vm02 bash[17457]: audit 2026-03-08T23:20:01.959671+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:02 vm10 bash[20034]: cluster 2026-03-08T23:20:01.719848+0000 mon.a (mon.0) 498 : cluster [INF] osd.4 [v2:192.168.123.104:6816/3877212940,v1:192.168.123.104:6817/3877212940] boot
2026-03-08T23:20:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:02 vm10 bash[20034]: cluster 2026-03-08T23:20:01.720017+0000 mon.a (mon.0) 499 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-08T23:20:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:02 vm10 bash[20034]: audit 2026-03-08T23:20:01.720985+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:20:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:02 vm10 bash[20034]: audit 2026-03-08T23:20:01.951688+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:02 vm10 bash[20034]: audit 2026-03-08T23:20:01.955560+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:02 vm10 bash[20034]: audit 2026-03-08T23:20:01.959671+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:04.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:03 vm04 bash[19918]: cluster 2026-03-08T23:20:02.200826+0000 mgr.x (mgr.14150) 175 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:04.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:03 vm04 bash[19918]: cluster 2026-03-08T23:20:02.966951+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-08T23:20:04.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:03 vm02 bash[17457]: cluster 2026-03-08T23:20:02.200826+0000 mgr.x (mgr.14150) 175 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:04.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:03 vm02 bash[17457]: cluster 2026-03-08T23:20:02.966951+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-08T23:20:04.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:03 vm10 bash[20034]: cluster 2026-03-08T23:20:02.200826+0000 mgr.x (mgr.14150) 175 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:04.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:03 vm10 bash[20034]: cluster 2026-03-08T23:20:02.966951+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-08T23:20:05.650 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:20:06.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:05 vm04 bash[19918]: cluster 2026-03-08T23:20:04.201052+0000 mgr.x (mgr.14150) 176 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:05 vm02 bash[17457]: cluster 2026-03-08T23:20:04.201052+0000 mgr.x (mgr.14150) 176 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:06.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:05 vm10 bash[20034]: cluster 2026-03-08T23:20:04.201052+0000 mgr.x (mgr.14150) 176 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:06.489 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:20:06.507 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm10:/dev/vde
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: cluster 2026-03-08T23:20:06.201307+0000 mgr.x (mgr.14150) 177 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.724058+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.728204+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.729249+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
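
After the ceph orch daemon add osd call above, the audit trail shows the mgr's bookkeeping for the new daemon (config rm of per-OSD osd_memory_target overrides, auth get for client.bootstrap-osd, config generate-minimal-conf), and the mon eventually records the boot and an osdmap epoch bump, as it did for osd.4 ("osd.4 ... boot", "osdmap e31: 5 total, 5 up, 5 in"). A harness can confirm the daemon actually came up by polling the osdmap rather than scraping these relays; a minimal sketch, assuming the ceph CLI is configured on the host (wait_for_osd_up is a hypothetical helper, not part of teuthology):

    import json
    import subprocess
    import time

    def wait_for_osd_up(osd_id, timeout=300):
        # Poll "ceph osd tree" until the given OSD reports "up", mirroring
        # the boot/osdmap progression visible in the relays above.
        deadline = time.time() + timeout
        while time.time() < deadline:
            tree = json.loads(subprocess.check_output(
                ["sudo", "ceph", "osd", "tree", "--format", "json"]))
            for node in tree["nodes"]:
                if (node.get("type") == "osd" and node.get("id") == osd_id
                        and node.get("status") == "up"):
                    return
            time.sleep(5)
        raise TimeoutError(f"osd.{osd_id} did not reach 'up' within {timeout}s")
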
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.729766+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.730262+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.733353+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.735325+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.736071+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:08.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:08 vm04 bash[19918]: audit 2026-03-08T23:20:07.740316+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: cluster 2026-03-08T23:20:06.201307+0000 mgr.x (mgr.14150) 177 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.724058+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.728204+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.729249+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.729766+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.730262+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.733353+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.735325+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.736071+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:08 vm02 bash[17457]: audit 2026-03-08T23:20:07.740316+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: cluster 2026-03-08T23:20:06.201307+0000 mgr.x (mgr.14150) 177 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:08.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.724058+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.728204+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.729249+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.729766+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.730262+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.733353+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.735325+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.736071+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:08.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:08 vm10 bash[20034]: audit 2026-03-08T23:20:07.740316+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:09.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:09 vm04 bash[19918]: cephadm 2026-03-08T23:20:07.718442+0000 mgr.x (mgr.14150) 178 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:20:09.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:09 vm04 bash[19918]: cephadm 2026-03-08T23:20:07.730613+0000 mgr.x (mgr.14150) 179 : cephadm [INF] Adjusting osd_memory_target on vm04 to 1517M
2026-03-08T23:20:09.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:09 vm02 bash[17457]: cephadm 2026-03-08T23:20:07.718442+0000 mgr.x (mgr.14150) 178 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:20:09.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:09 vm02 bash[17457]: cephadm 2026-03-08T23:20:07.730613+0000 mgr.x (mgr.14150) 179 : cephadm [INF] Adjusting osd_memory_target on vm04 to 1517M
2026-03-08T23:20:09.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:09 vm10 bash[20034]: cephadm 2026-03-08T23:20:07.718442+0000 mgr.x (mgr.14150) 178 : cephadm [INF] Detected new or changed devices on vm04
2026-03-08T23:20:09.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:09 vm10 bash[20034]: cephadm 2026-03-08T23:20:07.730613+0000 mgr.x (mgr.14150) 179 : cephadm [INF] Adjusting osd_memory_target on vm04 to 1517M
2026-03-08T23:20:10.158 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:20:10.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:10 vm04 bash[19918]: cluster 2026-03-08T23:20:08.201532+0000 mgr.x (mgr.14150) 180 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:10 vm02 bash[17457]: cluster 2026-03-08T23:20:08.201532+0000 mgr.x (mgr.14150) 180 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:10.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:10 vm10 bash[20034]: cluster 2026-03-08T23:20:08.201532+0000 mgr.x (mgr.14150) 180 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:11.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:11 vm04 bash[19918]: audit 2026-03-08T23:20:10.400162+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:20:11.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:11 vm04 bash[19918]: audit 2026-03-08T23:20:10.401283+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:20:11.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:11 vm04 bash[19918]: audit 2026-03-08T23:20:10.401637+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:11 vm02 bash[17457]: audit 2026-03-08T23:20:10.400162+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:20:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:11 vm02 bash[17457]: audit 2026-03-08T23:20:10.401283+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:20:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:11 vm02 bash[17457]: audit 2026-03-08T23:20:10.401283+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:20:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:11 vm02 bash[17457]: audit 2026-03-08T23:20:10.401637+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:11 vm02 bash[17457]: audit 2026-03-08T23:20:10.401637+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:11 vm10 bash[20034]: audit 2026-03-08T23:20:10.400162+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:20:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:11 vm10 bash[20034]: audit 2026-03-08T23:20:10.400162+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:20:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:11 vm10 bash[20034]: audit 2026-03-08T23:20:10.401283+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:20:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:11 vm10 bash[20034]: audit 2026-03-08T23:20:10.401283+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:20:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:11 vm10 bash[20034]: audit 2026-03-08T23:20:10.401637+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:11 vm10 bash[20034]: audit 2026-03-08T23:20:10.401637+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:12.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:12 vm04 bash[19918]: cluster 2026-03-08T23:20:10.201758+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:20:12.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:12 vm04 bash[19918]: cluster 2026-03-08T23:20:10.201758+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:20:12.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:12 vm04 bash[19918]: audit 2026-03-08T23:20:10.398403+0000 mgr.x (mgr.14150) 182 : audit [DBG] from='client.24214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:20:12.375 
2026-03-08T23:20:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:12 vm02 bash[17457]: cluster 2026-03-08T23:20:10.201758+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:12 vm02 bash[17457]: audit 2026-03-08T23:20:10.398403+0000 mgr.x (mgr.14150) 182 : audit [DBG] from='client.24214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:20:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:12 vm10 bash[20034]: cluster 2026-03-08T23:20:10.201758+0000 mgr.x (mgr.14150) 181 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:12 vm10 bash[20034]: audit 2026-03-08T23:20:10.398403+0000 mgr.x (mgr.14150) 182 : audit [DBG] from='client.24214 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:20:14.298 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:14 vm10 bash[20034]: cluster 2026-03-08T23:20:12.202082+0000 mgr.x (mgr.14150) 183 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:14.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:14 vm04 bash[19918]: cluster 2026-03-08T23:20:12.202082+0000 mgr.x (mgr.14150) 183 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:14.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:14 vm02 bash[17457]: cluster 2026-03-08T23:20:12.202082+0000 mgr.x (mgr.14150) 183 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:15.324 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:15 vm10 bash[20034]: audit 2026-03-08T23:20:14.876199+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.110:0/2127899695' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]: dispatch
2026-03-08T23:20:15.324 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:15 vm10 bash[20034]: audit 2026-03-08T23:20:14.877008+0000 mon.a (mon.0) 517 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]: dispatch
2026-03-08T23:20:15.324 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:15 vm10 bash[20034]: audit 2026-03-08T23:20:14.880617+0000 mon.a (mon.0) 518 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]': finished
2026-03-08T23:20:15.324 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:15 vm10 bash[20034]: cluster 2026-03-08T23:20:14.883427+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-08T23:20:15.324 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:15 vm10 bash[20034]: audit 2026-03-08T23:20:14.883710+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:15.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:15 vm04 bash[19918]: audit 2026-03-08T23:20:14.876199+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.110:0/2127899695' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]: dispatch
2026-03-08T23:20:15.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:15 vm04 bash[19918]: audit 2026-03-08T23:20:14.877008+0000 mon.a (mon.0) 517 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]: dispatch
2026-03-08T23:20:15.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:15 vm04 bash[19918]: audit 2026-03-08T23:20:14.880617+0000 mon.a (mon.0) 518 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]': finished
2026-03-08T23:20:15.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:15 vm04 bash[19918]: cluster 2026-03-08T23:20:14.883427+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-08T23:20:15.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:15 vm04 bash[19918]: audit 2026-03-08T23:20:14.883710+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:15.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:15 vm02 bash[17457]: audit 2026-03-08T23:20:14.876199+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.110:0/2127899695' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]: dispatch
2026-03-08T23:20:15.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:15 vm02 bash[17457]: audit 2026-03-08T23:20:14.877008+0000 mon.a (mon.0) 517 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]: dispatch
2026-03-08T23:20:15.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:15 vm02 bash[17457]: audit 2026-03-08T23:20:14.880617+0000 mon.a (mon.0) 518 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b6909095-51a9-4b9d-95f5-1d9f04559ea1"}]': finished
2026-03-08T23:20:15.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:15 vm02 bash[17457]: cluster 2026-03-08T23:20:14.883427+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-08T23:20:15.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:15 vm02 bash[17457]: audit 2026-03-08T23:20:14.883710+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:16 vm04 bash[19918]: cluster 2026-03-08T23:20:14.202365+0000 mgr.x (mgr.14150) 184 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:16 vm04 bash[19918]: audit 2026-03-08T23:20:15.498238+0000 mon.a (mon.0) 521 : audit [DBG] from='client.? 192.168.123.110:0/1498466814' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:20:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:16 vm02 bash[17457]: cluster 2026-03-08T23:20:14.202365+0000 mgr.x (mgr.14150) 184 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:16 vm02 bash[17457]: audit 2026-03-08T23:20:15.498238+0000 mon.a (mon.0) 521 : audit [DBG] from='client.? 192.168.123.110:0/1498466814' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:20:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:16 vm10 bash[20034]: cluster 2026-03-08T23:20:14.202365+0000 mgr.x (mgr.14150) 184 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:16 vm10 bash[20034]: audit 2026-03-08T23:20:15.498238+0000 mon.a (mon.0) 521 : audit [DBG] from='client.? 192.168.123.110:0/1498466814' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:20:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:17 vm10 bash[20034]: cluster 2026-03-08T23:20:16.202615+0000 mgr.x (mgr.14150) 185 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:17.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:17 vm04 bash[19918]: cluster 2026-03-08T23:20:16.202615+0000 mgr.x (mgr.14150) 185 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:17.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:17 vm02 bash[17457]: cluster 2026-03-08T23:20:16.202615+0000 mgr.x (mgr.14150) 185 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:19.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:19 vm10 bash[20034]: cluster 2026-03-08T23:20:18.202878+0000 mgr.x (mgr.14150) 186 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:19 vm04 bash[19918]: cluster 2026-03-08T23:20:18.202878+0000 mgr.x (mgr.14150) 186 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:19.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:19 vm02 bash[17457]: cluster 2026-03-08T23:20:18.202878+0000 mgr.x (mgr.14150) 186 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:21.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:21 vm04 bash[19918]: cluster 2026-03-08T23:20:20.203223+0000 mgr.x (mgr.14150) 187 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:21 vm02 bash[17457]: cluster 2026-03-08T23:20:20.203223+0000 mgr.x (mgr.14150) 187 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:21.909 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:21 vm10 bash[20034]: cluster 2026-03-08T23:20:20.203223+0000 mgr.x (mgr.14150) 187 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:23.622 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:23 vm10 bash[20034]: cluster 2026-03-08T23:20:22.203548+0000 mgr.x (mgr.14150) 188 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:23.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:23 vm04 bash[19918]: cluster 2026-03-08T23:20:22.203548+0000 mgr.x (mgr.14150) 188 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:23 vm02 bash[17457]: cluster 2026-03-08T23:20:22.203548+0000 mgr.x (mgr.14150) 188 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:24.474 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:24 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:20:24.474 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:24 vm10 bash[20034]: audit 2026-03-08T23:20:23.666111+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-08T23:20:24.474 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:24 vm10 bash[20034]: audit 2026-03-08T23:20:23.666618+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:24.475 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:24 vm10 bash[20034]: cephadm 2026-03-08T23:20:23.666980+0000 mgr.x (mgr.14150) 189 : cephadm [INF] Deploying daemon osd.5 on vm10
2026-03-08T23:20:24.761 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:24 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:20:24.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:24 vm04 bash[19918]: audit 2026-03-08T23:20:23.666111+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-08T23:20:24.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:24 vm04 bash[19918]: audit 2026-03-08T23:20:23.666618+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:24.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:24 vm04 bash[19918]: cephadm 2026-03-08T23:20:23.666980+0000 mgr.x (mgr.14150) 189 : cephadm [INF] Deploying daemon osd.5 on vm10
2026-03-08T23:20:24.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:24 vm02 bash[17457]: audit 2026-03-08T23:20:23.666111+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-08T23:20:24.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:24 vm02 bash[17457]: audit 2026-03-08T23:20:23.666618+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:24.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:24 vm02 bash[17457]: cephadm 2026-03-08T23:20:23.666980+0000 mgr.x (mgr.14150) 189 : cephadm [INF] Deploying daemon osd.5 on vm10
2026-03-08T23:20:25.741 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:25 vm10 bash[20034]: cluster 2026-03-08T23:20:24.203852+0000 mgr.x (mgr.14150) 190 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:25.741 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:25 vm10 bash[20034]: audit 2026-03-08T23:20:24.687397+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:25.741 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:25 vm10 bash[20034]: audit 2026-03-08T23:20:24.692632+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:25.741 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:25 vm10 bash[20034]: audit 2026-03-08T23:20:24.696236+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:25.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:25 vm04 bash[19918]: cluster 2026-03-08T23:20:24.203852+0000 mgr.x (mgr.14150) 190 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:25.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:25 vm04 bash[19918]: audit 2026-03-08T23:20:24.687397+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:25.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:25 vm04 bash[19918]: audit 2026-03-08T23:20:24.692632+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:25.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:25 vm04 bash[19918]: audit 2026-03-08T23:20:24.696236+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:25.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:25 vm02 bash[17457]: cluster 2026-03-08T23:20:24.203852+0000 mgr.x (mgr.14150) 190 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:25.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:25 vm02 bash[17457]: audit 2026-03-08T23:20:24.687397+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:25.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:25 vm02 bash[17457]: audit 2026-03-08T23:20:24.692632+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:25.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:25 vm02 bash[17457]: audit 2026-03-08T23:20:24.696236+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:27 vm10 bash[20034]: cluster 2026-03-08T23:20:26.204137+0000 mgr.x (mgr.14150) 191 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:27.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:27 vm04 bash[19918]: cluster 2026-03-08T23:20:26.204137+0000 mgr.x (mgr.14150) 191 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:27.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:27 vm02 bash[17457]: cluster 2026-03-08T23:20:26.204137+0000 mgr.x (mgr.14150) 191 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:29 vm04 bash[19918]: cluster 2026-03-08T23:20:28.204436+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:29 vm04 bash[19918]: audit 2026-03-08T23:20:28.465622+0000 mon.c (mon.1) 8 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-08T23:20:29.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:29 vm04 bash[19918]: audit 2026-03-08T23:20:28.466215+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-08T23:20:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:29 vm02 bash[17457]: cluster 2026-03-08T23:20:28.204436+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:29 vm02 bash[17457]: audit 2026-03-08T23:20:28.465622+0000 mon.c (mon.1) 8 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-08T23:20:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:29 vm02 bash[17457]: audit 2026-03-08T23:20:28.466215+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:20:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:29 vm10 bash[20034]: cluster 2026-03-08T23:20:28.204436+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:20:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:29 vm10 bash[20034]: cluster 2026-03-08T23:20:28.204436+0000 mgr.x (mgr.14150) 192 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:20:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:29 vm10 bash[20034]: audit 2026-03-08T23:20:28.465622+0000 mon.c (mon.1) 8 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:20:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:29 vm10 bash[20034]: audit 2026-03-08T23:20:28.465622+0000 mon.c (mon.1) 8 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:20:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:29 vm10 bash[20034]: audit 2026-03-08T23:20:28.466215+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:20:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:29 vm10 bash[20034]: audit 2026-03-08T23:20:28.466215+0000 mon.a (mon.0) 527 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:20:30.746 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.473843+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.473843+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: cluster 2026-03-08T23:20:29.475803+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: cluster 2026-03-08T23:20:29.475803+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.475946+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.475946+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 
2026-03-08T23:20:29.482370+0000 mon.c (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.482370+0000 mon.c (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.482917+0000 mon.a (mon.0) 531 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.747 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:30 vm10 bash[20034]: audit 2026-03-08T23:20:29.482917+0000 mon.a (mon.0) 531 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.473843+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.473843+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: cluster 2026-03-08T23:20:29.475803+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: cluster 2026-03-08T23:20:29.475803+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.475946+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.475946+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.482370+0000 mon.c (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.482370+0000 mon.c (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, 
"args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.482917+0000 mon.a (mon.0) 531 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:30 vm04 bash[19918]: audit 2026-03-08T23:20:29.482917+0000 mon.a (mon.0) 531 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.473843+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.473843+0000 mon.a (mon.0) 528 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: cluster 2026-03-08T23:20:29.475803+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: cluster 2026-03-08T23:20:29.475803+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.475946+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.475946+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.482370+0000 mon.c (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.482370+0000 mon.c (mon.1) 9 : audit [INF] from='osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.482917+0000 mon.a (mon.0) 531 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:30 vm02 bash[17457]: audit 2026-03-08T23:20:29.482917+0000 mon.a (mon.0) 531 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, 
"weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:29.491899+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:29.491899+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:29.491956+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:29.491956+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:30.204712+0000 mgr.x (mgr.14150) 193 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:30.204712+0000 mgr.x (mgr.14150) 193 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.477065+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.477065+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:30.479696+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: cluster 2026-03-08T23:20:30.479696+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.480482+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.480482+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.487671+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.487671+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.755 
2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.865501+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.869624+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.870219+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.870810+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:31.755 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:31 vm10 bash[20034]: audit 2026-03-08T23:20:30.875570+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:31.811 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 5 on host 'vm10'
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: cluster 2026-03-08T23:20:29.491899+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: cluster 2026-03-08T23:20:29.491956+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: cluster 2026-03-08T23:20:30.204712+0000 mgr.x (mgr.14150) 193 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.477065+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: cluster 2026-03-08T23:20:30.479696+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.480482+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.487671+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.865501+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.869624+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.870219+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.870810+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:31.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:31 vm04 bash[19918]: audit 2026-03-08T23:20:30.875570+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:31.890 DEBUG:teuthology.orchestra.run.vm10:osd.5> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.5.service
2026-03-08T23:20:31.890 INFO:tasks.cephadm:Deploying osd.6 on vm10 with /dev/vdd...
2026-03-08T23:20:31.890 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vdd
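The DEBUG line above is the first half of the per-device deploy cycle this task runs for every OSD: before a device is handed to the orchestrator it is wiped with ceph-volume so no LVM or partition state survives from a previous run. A minimal manual sketch of the same step, with the image, fsid, and device as placeholders for your own values:

    # Wipe LVM/partition state from a device before redeploying it as an OSD
    sudo cephadm --image <ceph-image> ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid <fsid> -- lvm zap /dev/vdX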
create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.477065+0000 mon.a (mon.0) 532 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: cluster 2026-03-08T23:20:30.479696+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: cluster 2026-03-08T23:20:30.479696+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.480482+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.480482+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.487671+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.487671+0000 mon.a (mon.0) 535 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.865501+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.865501+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.869624+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.869624+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.870219+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.870219+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 
2026-03-08T23:20:30.870810+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.870810+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.875570+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:31 vm02 bash[17457]: audit 2026-03-08T23:20:30.875570+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.486768+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.486768+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: cluster 2026-03-08T23:20:31.492610+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155] boot 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: cluster 2026-03-08T23:20:31.492610+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155] boot 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: cluster 2026-03-08T23:20:31.492718+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: cluster 2026-03-08T23:20:31.492718+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.494231+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.494231+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.797942+0000 mon.a (mon.0) 545 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.797942+0000 mon.a (mon.0) 545 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.803372+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:32.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:32 vm04 bash[19918]: audit 2026-03-08T23:20:31.807652+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: audit 2026-03-08T23:20:31.486768+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: cluster 2026-03-08T23:20:31.492610+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155] boot
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: cluster 2026-03-08T23:20:31.492718+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: audit 2026-03-08T23:20:31.494231+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: audit 2026-03-08T23:20:31.797942+0000 mon.a (mon.0) 545 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: audit 2026-03-08T23:20:31.803372+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:32.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:32 vm02 bash[17457]: audit 2026-03-08T23:20:31.807652+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: audit 2026-03-08T23:20:31.486768+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: cluster 2026-03-08T23:20:31.492610+0000 mon.a (mon.0) 542 : cluster [INF] osd.5 [v2:192.168.123.110:6800/3075842155,v1:192.168.123.110:6801/3075842155] boot
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: cluster 2026-03-08T23:20:31.492718+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: audit 2026-03-08T23:20:31.494231+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: audit 2026-03-08T23:20:31.797942+0000 mon.a (mon.0) 545 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: audit 2026-03-08T23:20:31.803372+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:32.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:32 vm10 bash[20034]: audit 2026-03-08T23:20:31.807652+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:33.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:33 vm04 bash[19918]: cluster 2026-03-08T23:20:32.204952+0000 mgr.x (mgr.14150) 194 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:33.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:33 vm04 bash[19918]: cluster 2026-03-08T23:20:32.508272+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-08T23:20:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:33 vm02 bash[17457]: cluster 2026-03-08T23:20:32.204952+0000 mgr.x (mgr.14150) 194 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:33 vm02 bash[17457]: cluster 2026-03-08T23:20:32.508272+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-08T23:20:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:33 vm10 bash[20034]: cluster 2026-03-08T23:20:32.204952+0000 mgr.x (mgr.14150) 194 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-08T23:20:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:33 vm10 bash[20034]: cluster 2026-03-08T23:20:32.508272+0000 mon.a (mon.0) 548 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
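Each change in this sequence (osd.5 registered, reweighted, then booting) bumps the osdmap epoch, which is why the relayed cluster records step from e34 through e37 above and on to e38 just below. When following a deploy like this one, the epoch and up/in counts can be read directly from the cluster; a quick sketch using the standard CLI:

    # One-line summary: OSD counts plus the current osdmap epoch
    ceph osd stat
    # Or read the epoch from the first line of the full map dump
    ceph osd dump | head -n 1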
2026-03-08T23:20:34.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:34 vm04 bash[19918]: cluster 2026-03-08T23:20:33.518615+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-08T23:20:34.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:34 vm02 bash[17457]: cluster 2026-03-08T23:20:33.518615+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-08T23:20:34.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:34 vm10 bash[20034]: cluster 2026-03-08T23:20:33.518615+0000 mon.a (mon.0) 549 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-08T23:20:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:35 vm04 bash[19918]: cluster 2026-03-08T23:20:34.205202+0000 mgr.x (mgr.14150) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:35 vm02 bash[17457]: cluster 2026-03-08T23:20:34.205202+0000 mgr.x (mgr.14150) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:35 vm10 bash[20034]: cluster 2026-03-08T23:20:34.205202+0000 mgr.x (mgr.14150) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:36.549 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: cluster 2026-03-08T23:20:36.205502+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 78 KiB/s, 0 objects/s recovering
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.340072+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.343930+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.344631+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.347552+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.348816+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.349282+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:37.560 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:37 vm10 bash[20034]: audit 2026-03-08T23:20:37.352811+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
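The "config rm ... osd_memory_target" audit entry above is cephadm's memory autotuning at work: the mgr clears the stale per-daemon override and, a few seconds later in this log, announces "Adjusting osd_memory_target on vm10 to 4551M" for the host. A sketch for inspecting or opting a daemon out of autotuning (osd.5 stands in for any OSD; treat the exact option spelling as an assumption to verify against your release):

    # See the value the daemon is currently configured with
    ceph config get osd.5 osd_memory_target
    # Opt out of autotuning if you would rather manage the target yourself
    ceph config set osd.5 osd_memory_target_autotune false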
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: cluster 2026-03-08T23:20:36.205502+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 78 KiB/s, 0 objects/s recovering
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.340072+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.343930+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.344631+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.347552+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.348816+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.349282+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:37 vm04 bash[19918]: audit 2026-03-08T23:20:37.352811+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: cluster 2026-03-08T23:20:36.205502+0000 mgr.x (mgr.14150) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 78 KiB/s, 0 objects/s recovering
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.340072+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.343930+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.344631+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.347552+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.348816+0000 mon.a (mon.0) 554 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.349282+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:20:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:37 vm02 bash[17457]: audit 2026-03-08T23:20:37.352811+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:38.066 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:20:38.084 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm10:/dev/vdd
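This is the second half of the deploy cycle: the freshly zapped /dev/vdd is handed to the orchestrator from inside a cephadm shell, which runs the command in a container that already has the cluster config and admin keyring mounted. A minimal sketch of the same step plus a sanity check, with host and device as placeholders:

    # Create an OSD on a specific host/device through the orchestrator
    sudo cephadm shell -- ceph orch daemon add osd <host>:/dev/vdX
    # Confirm the daemon was placed and joined the CRUSH tree
    sudo cephadm shell -- ceph orch ps
    sudo cephadm shell -- ceph osd tree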
2026-03-08T23:20:38.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:38 vm04 bash[19918]: cephadm 2026-03-08T23:20:37.334876+0000 mgr.x (mgr.14150) 197 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:20:38.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:38 vm04 bash[19918]: cephadm 2026-03-08T23:20:37.345060+0000 mgr.x (mgr.14150) 198 : cephadm [INF] Adjusting osd_memory_target on vm10 to 4551M
2026-03-08T23:20:38.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:38 vm02 bash[17457]: cephadm 2026-03-08T23:20:37.334876+0000 mgr.x (mgr.14150) 197 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:20:38.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:38 vm02 bash[17457]: cephadm 2026-03-08T23:20:37.345060+0000 mgr.x (mgr.14150) 198 : cephadm [INF] Adjusting osd_memory_target on vm10 to 4551M
2026-03-08T23:20:38.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:38 vm10 bash[20034]: cephadm 2026-03-08T23:20:37.334876+0000 mgr.x (mgr.14150) 197 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:20:38.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:38 vm10 bash[20034]: cephadm 2026-03-08T23:20:37.345060+0000 mgr.x (mgr.14150) 198 : cephadm [INF] Adjusting osd_memory_target on vm10 to 4551M
2026-03-08T23:20:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:39 vm04 bash[19918]: cluster 2026-03-08T23:20:38.205754+0000 mgr.x (mgr.14150) 199 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering
2026-03-08T23:20:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:39 vm02 bash[17457]: cluster 2026-03-08T23:20:38.205754+0000 mgr.x (mgr.14150) 199 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering
2026-03-08T23:20:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:39 vm10 bash[20034]: cluster 2026-03-08T23:20:38.205754+0000 mgr.x (mgr.14150) 199 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s, 0 objects/s recovering
2026-03-08T23:20:41.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:41 vm04 bash[19918]: cluster 2026-03-08T23:20:40.205981+0000 mgr.x (mgr.14150) 200 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-08T23:20:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:41 vm02 bash[17457]: cluster 2026-03-08T23:20:40.205981+0000 mgr.x (mgr.14150) 200 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-08T23:20:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:41 vm10 bash[20034]: cluster 2026-03-08T23:20:40.205981+0000 mgr.x (mgr.14150) 200 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-08T23:20:42.699 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:20:43.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:43 vm04 bash[19918]: cluster 2026-03-08T23:20:42.206249+0000 mgr.x (mgr.14150) 201 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 46 KiB/s, 0 objects/s recovering
2026-03-08T23:20:43.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:43 vm04 bash[19918]: audit 2026-03-08T23:20:42.944705+0000 mgr.x (mgr.14150) 202 : audit [DBG] from='client.24241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:20:43.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:43 vm04 bash[19918]: audit 2026-03-08T23:20:42.946362+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:20:43.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:43 vm04 bash[19918]: audit 2026-03-08T23:20:42.947643+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:20:43.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:43 vm04 bash[19918]: audit 2026-03-08T23:20:42.948035+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:43 vm02 bash[17457]: cluster 2026-03-08T23:20:42.206249+0000 mgr.x (mgr.14150) 201 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 46 KiB/s, 0 objects/s recovering
2026-03-08T23:20:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:43 vm02 bash[17457]: audit 2026-03-08T23:20:42.944705+0000 mgr.x (mgr.14150) 202 : audit [DBG] from='client.24241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:20:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:43 vm02 bash[17457]: audit 2026-03-08T23:20:42.946362+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:20:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:43 vm02 bash[17457]: audit 2026-03-08T23:20:42.947643+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:20:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:43 vm02 bash[17457]: audit 2026-03-08T23:20:42.948035+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:43 vm10 bash[20034]: cluster 2026-03-08T23:20:42.206249+0000 mgr.x (mgr.14150) 201 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 46 KiB/s, 0 objects/s recovering
2026-03-08T23:20:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:43 vm10 bash[20034]: audit 2026-03-08T23:20:42.944705+0000 mgr.x (mgr.14150) 202 : audit [DBG] from='client.24241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:20:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:43 vm10 bash[20034]: audit 2026-03-08T23:20:42.946362+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:20:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:43 vm10 bash[20034]: audit 2026-03-08T23:20:42.947643+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:20:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:43 vm10 bash[20034]: audit 2026-03-08T23:20:42.948035+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-08T23:20:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:45 vm02 bash[17457]: cluster 2026-03-08T23:20:44.206482+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-08T23:20:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:45 vm02 bash[17457]: cluster 2026-03-08T23:20:44.206482+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-08T23:20:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:45 vm10 bash[20034]: cluster 2026-03-08T23:20:44.206482+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-08T23:20:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:45 vm10 bash[20034]: cluster 2026-03-08T23:20:44.206482+0000 mgr.x (mgr.14150) 203 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: cluster 2026-03-08T23:20:46.206811+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: cluster 2026-03-08T23:20:46.206811+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: audit 2026-03-08T23:20:47.278924+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/2627560939' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: audit 2026-03-08T23:20:47.278924+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/2627560939' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: audit 2026-03-08T23:20:47.279615+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: audit 2026-03-08T23:20:47.279615+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch 2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: audit 2026-03-08T23:20:47.282641+0000 mon.a (mon.0) 561 : audit [INF] from='client.? 
2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: cluster 2026-03-08T23:20:47.285172+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in
2026-03-08T23:20:47.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:47 vm04 bash[19918]: audit 2026-03-08T23:20:47.285599+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:20:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:47 vm02 bash[17457]: cluster 2026-03-08T23:20:46.206811+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:20:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:47 vm02 bash[17457]: audit 2026-03-08T23:20:47.278924+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/2627560939' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch
2026-03-08T23:20:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:47 vm02 bash[17457]: audit 2026-03-08T23:20:47.279615+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch
2026-03-08T23:20:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:47 vm02 bash[17457]: audit 2026-03-08T23:20:47.282641+0000 mon.a (mon.0) 561 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]': finished
2026-03-08T23:20:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:47 vm02 bash[17457]: cluster 2026-03-08T23:20:47.285172+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in
2026-03-08T23:20:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:47 vm02 bash[17457]: audit 2026-03-08T23:20:47.285599+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:20:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:47 vm10 bash[20034]: cluster 2026-03-08T23:20:46.206811+0000 mgr.x (mgr.14150) 204 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:20:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:47 vm10 bash[20034]: audit 2026-03-08T23:20:47.278924+0000 mon.c (mon.1) 10 : audit [INF] from='client.? 192.168.123.110:0/2627560939' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch
2026-03-08T23:20:47.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:47 vm10 bash[20034]: audit 2026-03-08T23:20:47.279615+0000 mon.a (mon.0) 560 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]: dispatch
2026-03-08T23:20:47.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:47 vm10 bash[20034]: audit 2026-03-08T23:20:47.282641+0000 mon.a (mon.0) 561 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "488a0919-fe60-4b1d-844d-b16c2182536e"}]': finished
2026-03-08T23:20:47.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:47 vm10 bash[20034]: cluster 2026-03-08T23:20:47.285172+0000 mon.a (mon.0) 562 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in
2026-03-08T23:20:47.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:47 vm10 bash[20034]: audit 2026-03-08T23:20:47.285599+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:20:48.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:48 vm02 bash[17457]: audit 2026-03-08T23:20:47.900498+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/2123971421' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:20:48.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:48 vm10 bash[20034]: audit 2026-03-08T23:20:47.900498+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/2123971421' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:20:49.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:48 vm04 bash[19918]: audit 2026-03-08T23:20:47.900498+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.110:0/2123971421' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:20:49.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:49 vm02 bash[17457]: cluster 2026-03-08T23:20:48.207061+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:49.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:49 vm10 bash[20034]: cluster 2026-03-08T23:20:48.207061+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:49 vm04 bash[19918]: cluster 2026-03-08T23:20:48.207061+0000 mgr.x (mgr.14150) 205 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:51.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:51 vm02 bash[17457]: cluster 2026-03-08T23:20:50.207284+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:51.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:51 vm10 bash[20034]: cluster 2026-03-08T23:20:50.207284+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:51 vm04 bash[19918]: cluster 2026-03-08T23:20:50.207284+0000 mgr.x (mgr.14150) 206 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:53.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:53 vm10 bash[20034]: cluster 2026-03-08T23:20:52.207530+0000 mgr.x (mgr.14150) 207 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:54.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:53 vm04 bash[19918]: cluster 2026-03-08T23:20:52.207530+0000 mgr.x (mgr.14150) 207 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:53 vm02 bash[17457]: cluster 2026-03-08T23:20:52.207530+0000 mgr.x (mgr.14150) 207 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:55.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:55 vm10 bash[20034]: cluster 2026-03-08T23:20:54.207766+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:56.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:55 vm04 bash[19918]: cluster 2026-03-08T23:20:54.207766+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:55 vm02 bash[17457]: cluster 2026-03-08T23:20:54.207766+0000 mgr.x (mgr.14150) 208 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:56.828 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:56 vm10 bash[20034]: audit 2026-03-08T23:20:56.573239+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-08T23:20:56.828 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:56 vm10 bash[20034]: audit 2026-03-08T23:20:56.573751+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:56 vm04 bash[19918]: audit 2026-03-08T23:20:56.573239+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-08T23:20:57.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:56 vm04 bash[19918]: audit 2026-03-08T23:20:56.573751+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:56 vm02 bash[17457]: audit 2026-03-08T23:20:56.573239+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-08T23:20:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:56 vm02 bash[17457]: audit 2026-03-08T23:20:56.573751+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:20:57.372 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:20:57.372 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:20:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:20:57.632 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:20:57.632 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:20:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:20:57.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 bash[20034]: cluster 2026-03-08T23:20:56.207998+0000 mgr.x (mgr.14150) 209 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:57.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 bash[20034]: cephadm 2026-03-08T23:20:56.574149+0000 mgr.x (mgr.14150) 210 : cephadm [INF] Deploying daemon osd.6 on vm10
2026-03-08T23:20:57.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 bash[20034]: audit 2026-03-08T23:20:57.625775+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:57.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 bash[20034]: audit 2026-03-08T23:20:57.630495+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:57.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:57 vm10 bash[20034]: audit 2026-03-08T23:20:57.636544+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:57 vm04 bash[19918]: cluster 2026-03-08T23:20:56.207998+0000 mgr.x (mgr.14150) 209 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:57 vm04 bash[19918]: cephadm 2026-03-08T23:20:56.574149+0000 mgr.x (mgr.14150) 210 : cephadm [INF] Deploying daemon osd.6 on vm10
2026-03-08T23:20:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:57 vm04 bash[19918]: audit 2026-03-08T23:20:57.625775+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:20:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:57 vm04 bash[19918]: audit 2026-03-08T23:20:57.630495+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:57 vm04 bash[19918]: audit 2026-03-08T23:20:57.636544+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: cluster 2026-03-08T23:20:56.207998+0000 mgr.x (mgr.14150) 209 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: cephadm 2026-03-08T23:20:56.574149+0000 mgr.x (mgr.14150) 210 : cephadm [INF] Deploying daemon osd.6 on vm10
2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: audit 2026-03-08T23:20:57.625775+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: audit 2026-03-08T23:20:57.625775+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: audit 2026-03-08T23:20:57.630495+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: audit 2026-03-08T23:20:57.630495+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: audit 2026-03-08T23:20:57.636544+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:57 vm02 bash[17457]: audit 2026-03-08T23:20:57.636544+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:20:59.986 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:59 vm10 bash[20034]: cluster 2026-03-08T23:20:58.208242+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:20:59.987 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:20:59 vm10 bash[20034]: cluster 2026-03-08T23:20:58.208242+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:00.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:59 vm04 bash[19918]: cluster 2026-03-08T23:20:58.208242+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:00.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:20:59 vm04 bash[19918]: cluster 2026-03-08T23:20:58.208242+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:59 vm02 bash[17457]: cluster 2026-03-08T23:20:58.208242+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:00.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:20:59 vm02 bash[17457]: cluster 2026-03-08T23:20:58.208242+0000 mgr.x (mgr.14150) 211 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:01 vm04 bash[19918]: cluster 2026-03-08T23:21:00.208678+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:01 vm04 bash[19918]: cluster 2026-03-08T23:21:00.208678+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:01 vm04 bash[19918]: audit 2026-03-08T23:21:00.938083+0000 mon.c (mon.1) 12 : audit [INF] from='osd.6 
[v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:01 vm04 bash[19918]: audit 2026-03-08T23:21:00.938083+0000 mon.c (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:01 vm04 bash[19918]: audit 2026-03-08T23:21:00.938631+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:01 vm04 bash[19918]: audit 2026-03-08T23:21:00.938631+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:01 vm02 bash[17457]: cluster 2026-03-08T23:21:00.208678+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:01 vm02 bash[17457]: cluster 2026-03-08T23:21:00.208678+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:01 vm02 bash[17457]: audit 2026-03-08T23:21:00.938083+0000 mon.c (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:01 vm02 bash[17457]: audit 2026-03-08T23:21:00.938083+0000 mon.c (mon.1) 12 : audit [INF] from='osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:01 vm02 bash[17457]: audit 2026-03-08T23:21:00.938631+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:01 vm02 bash[17457]: audit 2026-03-08T23:21:00.938631+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-08T23:21:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:01 vm10 bash[20034]: cluster 2026-03-08T23:21:00.208678+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:01 vm10 bash[20034]: cluster 2026-03-08T23:21:00.208678+0000 mgr.x (mgr.14150) 212 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:01 vm10 bash[20034]: audit 2026-03-08T23:21:00.938083+0000 mon.c (mon.1) 
2026-03-08T23:21:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:01 vm10 bash[20034]: audit 2026-03-08T23:21:00.938631+0000 mon.a (mon.0) 569 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:21:03.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: audit 2026-03-08T23:21:01.675419+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-08T23:21:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: cluster 2026-03-08T23:21:01.678231+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-08T23:21:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: audit 2026-03-08T23:21:01.678452+0000 mon.c (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-08T23:21:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: audit 2026-03-08T23:21:01.679072+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: audit 2026-03-08T23:21:01.679156+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-08T23:21:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: audit 2026-03-08T23:21:02.679900+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished
2026-03-08T23:21:03.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:02 vm04 bash[19918]: cluster 2026-03-08T23:21:02.682158+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in
2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:01.675419+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: cluster 2026-03-08T23:21:01.678231+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:01.678452+0000 mon.c (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch
2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:01.679072+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:01.679072+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:01.679156+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:01.679156+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:02.679900+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: audit 2026-03-08T23:21:02.679900+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: cluster 2026-03-08T23:21:02.682158+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:21:03.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:02 vm02 bash[17457]: cluster 2026-03-08T23:21:02.682158+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.675419+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.675419+0000 mon.a (mon.0) 570 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: cluster 2026-03-08T23:21:01.678231+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: cluster 2026-03-08T23:21:01.678231+0000 mon.a (mon.0) 571 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.678452+0000 mon.c (mon.1) 13 : audit [INF] from='osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.678452+0000 mon.c (mon.1) 13 : audit [INF] from='osd.6 
[v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.679072+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.679072+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.679156+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:01.679156+0000 mon.a (mon.0) 573 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:02.679900+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: audit 2026-03-08T23:21:02.679900+0000 mon.a (mon.0) 574 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: cluster 2026-03-08T23:21:02.682158+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:21:03.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:02 vm10 bash[20034]: cluster 2026-03-08T23:21:02.682158+0000 mon.a (mon.0) 575 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:21:03.914 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:03 vm10 bash[20034]: cluster 2026-03-08T23:21:02.209011+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:03.915 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:03 vm10 bash[20034]: cluster 2026-03-08T23:21:02.209011+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:21:03.915 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:03 vm10 bash[20034]: audit 2026-03-08T23:21:02.682317+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:21:03.915 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:03 vm10 bash[20034]: audit 2026-03-08T23:21:02.682317+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:21:03.915 
2026-03-08T23:21:03.915 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:03 vm10 bash[20034]: audit 2026-03-08T23:21:02.690019+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:03 vm04 bash[19918]: cluster 2026-03-08T23:21:02.209011+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:21:04.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:03 vm04 bash[19918]: audit 2026-03-08T23:21:02.682317+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:03 vm04 bash[19918]: audit 2026-03-08T23:21:02.690019+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:03 vm02 bash[17457]: cluster 2026-03-08T23:21:02.209011+0000 mgr.x (mgr.14150) 213 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:21:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:03 vm02 bash[17457]: audit 2026-03-08T23:21:02.682317+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:03 vm02 bash[17457]: audit 2026-03-08T23:21:02.690019+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.744 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 6 on host 'vm10'
2026-03-08T23:21:04.825 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: cluster 2026-03-08T23:21:01.901472+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:21:04.825 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: cluster 2026-03-08T23:21:01.901563+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:03.688053+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: cluster 2026-03-08T23:21:03.701067+0000 mon.a (mon.0) 579 : cluster [INF] osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458] boot
2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: cluster 2026-03-08T23:21:03.701143+0000 mon.a (mon.0) 580 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:03.701301+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:03.747152+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
(mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:03.753229+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:03.753229+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:04.164969+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:04.164969+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:04.165565+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:04.165565+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:04.169716+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:04.826 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:04 vm10 bash[20034]: audit 2026-03-08T23:21:04.169716+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:04.826 DEBUG:teuthology.orchestra.run.vm10:osd.6> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.6.service 2026-03-08T23:21:04.827 INFO:tasks.cephadm:Deploying osd.7 on vm10 with /dev/vdc... 
2026-03-08T23:21:04.828 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- lvm zap /dev/vdc
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: cluster 2026-03-08T23:21:01.901472+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: cluster 2026-03-08T23:21:01.901563+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:03.688053+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: cluster 2026-03-08T23:21:03.701067+0000 mon.a (mon.0) 579 : cluster [INF] osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458] boot
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: cluster 2026-03-08T23:21:03.701143+0000 mon.a (mon.0) 580 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:03.701301+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:03.747152+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:03.753229+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:04.164969+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:04.165565+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:05.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:04 vm04 bash[19918]: audit 2026-03-08T23:21:04.169716+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: cluster 2026-03-08T23:21:01.901472+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: cluster 2026-03-08T23:21:01.901563+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:03.688053+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: cluster 2026-03-08T23:21:03.701067+0000 mon.a (mon.0) 579 : cluster [INF] osd.6 [v2:192.168.123.110:6808/275518458,v1:192.168.123.110:6809/275518458] boot
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: cluster 2026-03-08T23:21:03.701143+0000 mon.a (mon.0) 580 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:03.701301+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:03.747152+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:03.753229+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:04.164969+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:04.165565+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:05.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:04 vm02 bash[17457]: audit 2026-03-08T23:21:04.169716+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:05 vm04 bash[19918]: cluster 2026-03-08T23:21:04.209200+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:21:06.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:05 vm04 bash[19918]: audit 2026-03-08T23:21:04.732327+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:21:06.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:05 vm04 bash[19918]: audit 2026-03-08T23:21:04.736577+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:05 vm04 bash[19918]: audit 2026-03-08T23:21:04.740509+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:05 vm04 bash[19918]: cluster 2026-03-08T23:21:04.764637+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-08T23:21:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:05 vm02 bash[17457]: cluster 2026-03-08T23:21:04.209200+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:21:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:05 vm02 bash[17457]: audit 2026-03-08T23:21:04.732327+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:21:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:05 vm02 bash[17457]: audit 2026-03-08T23:21:04.736577+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:05 vm02 bash[17457]: audit 2026-03-08T23:21:04.740509+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:05 vm02 bash[17457]: cluster 2026-03-08T23:21:04.764637+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-08T23:21:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:05 vm10 bash[20034]: cluster 2026-03-08T23:21:04.209200+0000 mgr.x (mgr.14150) 214 : cluster [DBG] pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:21:06.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:05 vm10 bash[20034]: audit 2026-03-08T23:21:04.732327+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:21:06.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:05 vm10 bash[20034]: audit 2026-03-08T23:21:04.736577+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:05 vm10 bash[20034]: audit 2026-03-08T23:21:04.740509+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:06.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:05 vm10 bash[20034]: cluster 2026-03-08T23:21:04.764637+0000 mon.a (mon.0) 590 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-08T23:21:08.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:07 vm04 bash[19918]: cluster 2026-03-08T23:21:06.209447+0000 mgr.x (mgr.14150) 215 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:08.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:07 vm02 bash[17457]: cluster 2026-03-08T23:21:06.209447+0000 mgr.x (mgr.14150) 215 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:08.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:07 vm10 bash[20034]: cluster 2026-03-08T23:21:06.209447+0000 mgr.x (mgr.14150) 215 : cluster [DBG] pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:09.494 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:21:09.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:09 vm10 bash[20034]: cluster 2026-03-08T23:21:08.209826+0000 mgr.x (mgr.14150) 216 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:10.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:09 vm04 bash[19918]: cluster 2026-03-08T23:21:08.209826+0000 mgr.x (mgr.14150) 216 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:10.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:09 vm02 bash[17457]: cluster 2026-03-08T23:21:08.209826+0000 mgr.x (mgr.14150) 216 : cluster [DBG] pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:11.021 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:21:11.036 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch daemon add osd vm10:/dev/vdc
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: cluster 2026-03-08T23:21:10.210257+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: cephadm 2026-03-08T23:21:10.278046+0000 mgr.x (mgr.14150) 218 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.283299+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.287312+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.288056+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.288567+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: cephadm 2026-03-08T23:21:10.288855+0000 mgr.x (mgr.14150) 219 : cephadm [INF] Adjusting osd_memory_target on vm10 to 2275M
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.291862+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.293251+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.293645+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:11.292 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:11 vm10 bash[20034]: audit 2026-03-08T23:21:10.297789+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: cluster 2026-03-08T23:21:10.210257+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: cephadm 2026-03-08T23:21:10.278046+0000 mgr.x (mgr.14150) 218 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.283299+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.287312+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.288056+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.288567+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: cephadm 2026-03-08T23:21:10.288855+0000 mgr.x (mgr.14150) 219 : cephadm [INF] Adjusting osd_memory_target on vm10 to 2275M
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.291862+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.293251+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.293645+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:11.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:11 vm04 bash[19918]: audit 2026-03-08T23:21:10.297789+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: cluster 2026-03-08T23:21:10.210257+0000 mgr.x (mgr.14150) 217 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: cephadm 2026-03-08T23:21:10.278046+0000 mgr.x (mgr.14150) 218 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.283299+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.287312+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.288056+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.288567+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: cephadm 2026-03-08T23:21:10.288855+0000 mgr.x (mgr.14150) 219 : cephadm [INF] Adjusting osd_memory_target on vm10 to 2275M
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.291862+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.293251+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.293645+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:11 vm02 bash[17457]: audit 2026-03-08T23:21:10.297789+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:13.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:13 vm04 bash[19918]: cluster 2026-03-08T23:21:12.210656+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:13.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:13 vm02 bash[17457]: cluster 2026-03-08T23:21:12.210656+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:13.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:13 vm10 bash[20034]: cluster 2026-03-08T23:21:12.210656+0000 mgr.x (mgr.14150) 220 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:15.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:15 vm04 bash[19918]: cluster 2026-03-08T23:21:14.210992+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:15.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:15 vm02 bash[17457]: cluster 2026-03-08T23:21:14.210992+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:15.648 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:21:15.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:15 vm10 bash[20034]: cluster 2026-03-08T23:21:14.210992+0000 mgr.x (mgr.14150) 221 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:16.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:16 vm04 bash[19918]: audit 2026-03-08T23:21:15.887844+0000 mgr.x (mgr.14150) 222 : audit [DBG] from='client.24268 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:21:16.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:16 vm04 bash[19918]: audit 2026-03-08T23:21:15.889080+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:21:16.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:16 vm04 bash[19918]: audit 2026-03-08T23:21:15.890359+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:21:16.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:16 vm04 bash[19918]: audit 2026-03-08T23:21:15.890924+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:16.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:16 vm02 bash[17457]: audit 2026-03-08T23:21:15.887844+0000 mgr.x (mgr.14150) 222 : audit [DBG] from='client.24268 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:21:16.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:16 vm02 bash[17457]: audit 2026-03-08T23:21:15.889080+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:21:16.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:16 vm02 bash[17457]: audit 2026-03-08T23:21:15.890359+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:21:16.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:16 vm02 bash[17457]: audit 2026-03-08T23:21:15.890924+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:16.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:16 vm10 bash[20034]: audit 2026-03-08T23:21:15.887844+0000 mgr.x (mgr.14150) 222 : audit [DBG] from='client.24268 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm10:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:21:16.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:16 vm10 bash[20034]: audit 2026-03-08T23:21:15.889080+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:21:16.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:16 vm10 bash[20034]: audit 2026-03-08T23:21:15.890359+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:21:16.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:16 vm10 bash[20034]: audit 2026-03-08T23:21:15.890924+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:17 vm04 bash[19918]: cluster 2026-03-08T23:21:16.211235+0000 mgr.x (mgr.14150) 223 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:17 vm02 bash[17457]: cluster 2026-03-08T23:21:16.211235+0000 mgr.x (mgr.14150) 223 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:17.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:17 vm10 bash[20034]: cluster 2026-03-08T23:21:16.211235+0000 mgr.x (mgr.14150) 223 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:19.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:19 vm04 bash[19918]: cluster 2026-03-08T23:21:18.211530+0000 mgr.x (mgr.14150) 224 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:19 vm02 bash[17457]: cluster 2026-03-08T23:21:18.211530+0000 mgr.x (mgr.14150) 224 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:19.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:19 vm10 bash[20034]: cluster 2026-03-08T23:21:18.211530+0000 mgr.x (mgr.14150) 224 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:20.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:20 vm04 bash[19918]: audit 2026-03-08T23:21:20.260655+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/157239762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]: dispatch
2026-03-08T23:21:20.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:20 vm04 bash[19918]: audit 2026-03-08T23:21:20.261161+0000 mon.a (mon.0) 602 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]: dispatch
2026-03-08T23:21:20.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:20 vm04 bash[19918]: audit 2026-03-08T23:21:20.264462+0000 mon.a (mon.0) 603 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]': finished
2026-03-08T23:21:20.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:20 vm04 bash[19918]: cluster 2026-03-08T23:21:20.267187+0000 mon.a (mon.0) 604 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in
2026-03-08T23:21:20.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:20 vm04 bash[19918]: audit 2026-03-08T23:21:20.267317+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:21:20.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:20 vm02 bash[17457]: audit 2026-03-08T23:21:20.260655+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/157239762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]: dispatch
2026-03-08T23:21:20.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:20 vm02 bash[17457]: audit 2026-03-08T23:21:20.261161+0000 mon.a (mon.0) 602 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]: dispatch
2026-03-08T23:21:20.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:20 vm02 bash[17457]: audit 2026-03-08T23:21:20.264462+0000 mon.a (mon.0) 603 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]': finished
2026-03-08T23:21:20.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:20 vm02 bash[17457]: cluster 2026-03-08T23:21:20.267187+0000 mon.a (mon.0) 604 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in
2026-03-08T23:21:20.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:20 vm02 bash[17457]: audit 2026-03-08T23:21:20.267317+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:21:20.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:20 vm10 bash[20034]: audit 2026-03-08T23:21:20.260655+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.110:0/157239762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]: dispatch
2026-03-08T23:21:20.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:20 vm10 bash[20034]: audit 2026-03-08T23:21:20.261161+0000 mon.a (mon.0) 602 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]: dispatch
2026-03-08T23:21:20.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:20 vm10 bash[20034]: audit 2026-03-08T23:21:20.264462+0000 mon.a (mon.0) 603 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aef086d3-44c6-4078-a3ac-f3b6f3a98df9"}]': finished
2026-03-08T23:21:20.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:20 vm10 bash[20034]: cluster 2026-03-08T23:21:20.267187+0000 mon.a (mon.0) 604 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in
2026-03-08T23:21:20.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:20 vm10 bash[20034]: audit 2026-03-08T23:21:20.267317+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:21:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:21 vm04 bash[19918]: cluster 2026-03-08T23:21:20.211867+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:21 vm04 bash[19918]: audit 2026-03-08T23:21:20.881520+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.110:0/115083111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:21:21.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:21 vm02 bash[17457]: cluster 2026-03-08T23:21:20.211867+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:21.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:21 vm02 bash[17457]: audit 2026-03-08T23:21:20.881520+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.110:0/115083111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:21:21.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:21 vm02 bash[17457]: audit 2026-03-08T23:21:20.881520+0000 mon.a (mon.0) 606 : audit [DBG] from='client.?
192.168.123.110:0/115083111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:21:21.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:21 vm10 bash[20034]: cluster 2026-03-08T23:21:20.211867+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:21.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:21 vm10 bash[20034]: cluster 2026-03-08T23:21:20.211867+0000 mgr.x (mgr.14150) 225 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:21.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:21 vm10 bash[20034]: audit 2026-03-08T23:21:20.881520+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.110:0/115083111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:21:21.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:21 vm10 bash[20034]: audit 2026-03-08T23:21:20.881520+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.110:0/115083111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:21:23.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:23 vm04 bash[19918]: cluster 2026-03-08T23:21:22.212103+0000 mgr.x (mgr.14150) 226 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:23.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:23 vm04 bash[19918]: cluster 2026-03-08T23:21:22.212103+0000 mgr.x (mgr.14150) 226 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:23.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:23 vm02 bash[17457]: cluster 2026-03-08T23:21:22.212103+0000 mgr.x (mgr.14150) 226 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:23.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:23 vm02 bash[17457]: cluster 2026-03-08T23:21:22.212103+0000 mgr.x (mgr.14150) 226 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:23.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:23 vm10 bash[20034]: cluster 2026-03-08T23:21:22.212103+0000 mgr.x (mgr.14150) 226 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:23.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:23 vm10 bash[20034]: cluster 2026-03-08T23:21:22.212103+0000 mgr.x (mgr.14150) 226 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:25.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:25 vm04 bash[19918]: cluster 2026-03-08T23:21:24.212349+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:25.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:25 vm04 bash[19918]: cluster 2026-03-08T23:21:24.212349+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:25 vm02 bash[17457]: cluster 2026-03-08T23:21:24.212349+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 
2026-03-08T23:21:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:25 vm02 bash[17457]: cluster 2026-03-08T23:21:24.212349+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:25.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:25 vm10 bash[20034]: cluster 2026-03-08T23:21:24.212349+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:25.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:25 vm10 bash[20034]: cluster 2026-03-08T23:21:24.212349+0000 mgr.x (mgr.14150) 227 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:27.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:27 vm04 bash[19918]: cluster 2026-03-08T23:21:26.212605+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:27.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:27 vm04 bash[19918]: cluster 2026-03-08T23:21:26.212605+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:27 vm02 bash[17457]: cluster 2026-03-08T23:21:26.212605+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:27 vm02 bash[17457]: cluster 2026-03-08T23:21:26.212605+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:27 vm10 bash[20034]: cluster 2026-03-08T23:21:26.212605+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:27 vm10 bash[20034]: cluster 2026-03-08T23:21:26.212605+0000 mgr.x (mgr.14150) 228 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.506 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:29 vm10 bash[20034]: cluster 2026-03-08T23:21:28.212824+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.506 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:29 vm10 bash[20034]: cluster 2026-03-08T23:21:28.212824+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.506 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:29 vm10 bash[20034]: audit 2026-03-08T23:21:29.268423+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:21:29.506 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:29 vm10 bash[20034]: audit 2026-03-08T23:21:29.268423+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:21:29.506 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:29 vm10 bash[20034]: 
audit 2026-03-08T23:21:29.268890+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:29.506 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:29 vm10 bash[20034]: audit 2026-03-08T23:21:29.268890+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:29 vm04 bash[19918]: cluster 2026-03-08T23:21:28.212824+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:29 vm04 bash[19918]: cluster 2026-03-08T23:21:28.212824+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:29 vm04 bash[19918]: audit 2026-03-08T23:21:29.268423+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:21:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:29 vm04 bash[19918]: audit 2026-03-08T23:21:29.268423+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:21:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:29 vm04 bash[19918]: audit 2026-03-08T23:21:29.268890+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:29 vm04 bash[19918]: audit 2026-03-08T23:21:29.268890+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:29 vm02 bash[17457]: cluster 2026-03-08T23:21:28.212824+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:29 vm02 bash[17457]: cluster 2026-03-08T23:21:28.212824+0000 mgr.x (mgr.14150) 229 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:29 vm02 bash[17457]: audit 2026-03-08T23:21:29.268423+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:21:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:29 vm02 bash[17457]: audit 2026-03-08T23:21:29.268423+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:21:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:29 vm02 bash[17457]: audit 2026-03-08T23:21:29.268890+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:29.644 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:29 vm02 bash[17457]: audit 2026-03-08T23:21:29.268890+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:21:30.351 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:30 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:21:30.351 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:30 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:21:30.351 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:30 vm10 bash[20034]: cephadm 2026-03-08T23:21:29.269243+0000 mgr.x (mgr.14150) 230 : cephadm [INF] Deploying daemon osd.7 on vm10 2026-03-08T23:21:30.351 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:30 vm10 bash[20034]: cephadm 2026-03-08T23:21:29.269243+0000 mgr.x (mgr.14150) 230 : cephadm [INF] Deploying daemon osd.7 on vm10 2026-03-08T23:21:30.351 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:21:30 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:21:30.351 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:21:30 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:21:30.351 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:21:30 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:21:30.351 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:21:30 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:21:30.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:30 vm04 bash[19918]: cephadm 2026-03-08T23:21:29.269243+0000 mgr.x (mgr.14150) 230 : cephadm [INF] Deploying daemon osd.7 on vm10 2026-03-08T23:21:30.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:30 vm04 bash[19918]: cephadm 2026-03-08T23:21:29.269243+0000 mgr.x (mgr.14150) 230 : cephadm [INF] Deploying daemon osd.7 on vm10 2026-03-08T23:21:30.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:30 vm02 bash[17457]: cephadm 2026-03-08T23:21:29.269243+0000 mgr.x (mgr.14150) 230 : cephadm [INF] Deploying daemon osd.7 on vm10 2026-03-08T23:21:30.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:30 vm02 bash[17457]: cephadm 2026-03-08T23:21:29.269243+0000 mgr.x (mgr.14150) 230 : cephadm [INF] Deploying daemon osd.7 on vm10 2026-03-08T23:21:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: cluster 2026-03-08T23:21:30.213047+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: cluster 2026-03-08T23:21:30.213047+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:31.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: audit 2026-03-08T23:21:30.383553+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:21:31.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: audit 2026-03-08T23:21:30.383553+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:21:31.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: audit 2026-03-08T23:21:30.387983+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: audit 2026-03-08T23:21:30.387983+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: audit 2026-03-08T23:21:30.392372+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.625 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:31 vm04 bash[19918]: audit 2026-03-08T23:21:30.392372+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: cluster 2026-03-08T23:21:30.213047+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: cluster 2026-03-08T23:21:30.213047+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: audit 2026-03-08T23:21:30.383553+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 
192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: audit 2026-03-08T23:21:30.383553+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: audit 2026-03-08T23:21:30.387983+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: audit 2026-03-08T23:21:30.387983+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: audit 2026-03-08T23:21:30.392372+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:31 vm02 bash[17457]: audit 2026-03-08T23:21:30.392372+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: cluster 2026-03-08T23:21:30.213047+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: cluster 2026-03-08T23:21:30.213047+0000 mgr.x (mgr.14150) 231 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: audit 2026-03-08T23:21:30.383553+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: audit 2026-03-08T23:21:30.383553+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: audit 2026-03-08T23:21:30.387983+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: audit 2026-03-08T23:21:30.387983+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: audit 2026-03-08T23:21:30.392372+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:31 vm10 bash[20034]: audit 2026-03-08T23:21:30.392372+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:33.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:33 vm04 bash[19918]: cluster 2026-03-08T23:21:32.213272+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 
140 GiB / 140 GiB avail 2026-03-08T23:21:33.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:33 vm04 bash[19918]: cluster 2026-03-08T23:21:32.213272+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:33.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:33 vm02 bash[17457]: cluster 2026-03-08T23:21:32.213272+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:33.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:33 vm02 bash[17457]: cluster 2026-03-08T23:21:32.213272+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:33.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:33 vm10 bash[20034]: cluster 2026-03-08T23:21:32.213272+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:33.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:33 vm10 bash[20034]: cluster 2026-03-08T23:21:32.213272+0000 mgr.x (mgr.14150) 232 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:34.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:34 vm04 bash[19918]: audit 2026-03-08T23:21:33.832455+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-08T23:21:34.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:34 vm04 bash[19918]: audit 2026-03-08T23:21:33.832455+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-08T23:21:34.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:34 vm02 bash[17457]: audit 2026-03-08T23:21:33.832455+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-08T23:21:34.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:34 vm02 bash[17457]: audit 2026-03-08T23:21:33.832455+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-08T23:21:34.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:34 vm10 bash[20034]: audit 2026-03-08T23:21:33.832455+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-08T23:21:34.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:34 vm10 bash[20034]: audit 2026-03-08T23:21:33.832455+0000 mon.a (mon.0) 612 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: 
cluster 2026-03-08T23:21:34.213535+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: cluster 2026-03-08T23:21:34.213535+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: audit 2026-03-08T23:21:34.366692+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: audit 2026-03-08T23:21:34.366692+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: cluster 2026-03-08T23:21:34.368606+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: cluster 2026-03-08T23:21:34.368606+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: audit 2026-03-08T23:21:34.368762+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: audit 2026-03-08T23:21:34.368762+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: audit 2026-03-08T23:21:34.368868+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:35.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:35 vm02 bash[17457]: audit 2026-03-08T23:21:34.368868+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: cluster 2026-03-08T23:21:34.213535+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: cluster 2026-03-08T23:21:34.213535+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: audit 2026-03-08T23:21:34.366692+0000 mon.a (mon.0) 
613 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: audit 2026-03-08T23:21:34.366692+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: cluster 2026-03-08T23:21:34.368606+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: cluster 2026-03-08T23:21:34.368606+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: audit 2026-03-08T23:21:34.368762+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: audit 2026-03-08T23:21:34.368762+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: audit 2026-03-08T23:21:34.368868+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:35.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:35 vm10 bash[20034]: audit 2026-03-08T23:21:34.368868+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: cluster 2026-03-08T23:21:34.213535+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: cluster 2026-03-08T23:21:34.213535+0000 mgr.x (mgr.14150) 233 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: audit 2026-03-08T23:21:34.366692+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: audit 2026-03-08T23:21:34.366692+0000 mon.a (mon.0) 613 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 
2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: cluster 2026-03-08T23:21:34.368606+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: cluster 2026-03-08T23:21:34.368606+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: audit 2026-03-08T23:21:34.368762+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: audit 2026-03-08T23:21:34.368762+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]: dispatch 2026-03-08T23:21:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: audit 2026-03-08T23:21:34.368868+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:35.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:35 vm04 bash[19918]: audit 2026-03-08T23:21:34.368868+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:35.369711+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:35.369711+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: cluster 2026-03-08T23:21:35.372015+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: cluster 2026-03-08T23:21:35.372015+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:35.376852+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:35.376852+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 
23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:35.379662+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:35.379662+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:36.379178+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:36 vm10 bash[20034]: audit 2026-03-08T23:21:36.379178+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:35.369711+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:35.369711+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: cluster 2026-03-08T23:21:35.372015+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: cluster 2026-03-08T23:21:35.372015+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:35.376852+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:35.376852+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:35.379662+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:35.379662+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:36.379178+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 
192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:36 vm04 bash[19918]: audit 2026-03-08T23:21:36.379178+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:35.369711+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:35.369711+0000 mon.a (mon.0) 617 : audit [INF] from='osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm10", "root=default"]}]': finished 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: cluster 2026-03-08T23:21:35.372015+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: cluster 2026-03-08T23:21:35.372015+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:35.376852+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:35.376852+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:35.379662+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:35.379662+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:36.379178+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:36 vm02 bash[17457]: audit 2026-03-08T23:21:36.379178+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:37.447 INFO:teuthology.orchestra.run.vm10.stdout:Created osd(s) 7 on host 'vm10' 2026-03-08T23:21:37.531 DEBUG:teuthology.orchestra.run.vm10:osd.7> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.7.service 2026-03-08T23:21:37.532 INFO:tasks.cephadm:Waiting for 8 OSDs to 
come up... 2026-03-08T23:21:37.532 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd stat -f json 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:34.801089+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:34.801089+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:34.801131+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:34.801131+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:36.213765+0000 mgr.x (mgr.14150) 234 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:36.213765+0000 mgr.x (mgr.14150) 234 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:36.391374+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497] boot 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:36.391374+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497] boot 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:36.391518+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: cluster 2026-03-08T23:21:36.391518+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.392683+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.392683+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:21:37.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.566151+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:21:37.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.566151+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 
2026-03-08T23:21:37.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.569801+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.570509+0000 mon.a (mon.0) 627 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:37.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.570946+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:37.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:37 vm10 bash[20034]: audit 2026-03-08T23:21:36.574089+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: cluster 2026-03-08T23:21:34.801089+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: cluster 2026-03-08T23:21:34.801131+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: cluster 2026-03-08T23:21:36.213765+0000 mgr.x (mgr.14150) 234 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: cluster 2026-03-08T23:21:36.391374+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497] boot
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: cluster 2026-03-08T23:21:36.391518+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: audit 2026-03-08T23:21:36.392683+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: audit 2026-03-08T23:21:36.566151+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: audit 2026-03-08T23:21:36.569801+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: audit 2026-03-08T23:21:36.570509+0000 mon.a (mon.0) 627 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: audit 2026-03-08T23:21:36.570946+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:37.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:37 vm04 bash[19918]: audit 2026-03-08T23:21:36.574089+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: cluster 2026-03-08T23:21:34.801089+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: cluster 2026-03-08T23:21:34.801131+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: cluster 2026-03-08T23:21:36.213765+0000 mgr.x (mgr.14150) 234 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: cluster 2026-03-08T23:21:36.391374+0000 mon.a (mon.0) 622 : cluster [INF] osd.7 [v2:192.168.123.110:6816/3535254497,v1:192.168.123.110:6817/3535254497] boot
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: cluster 2026-03-08T23:21:36.391518+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: audit 2026-03-08T23:21:36.392683+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: audit 2026-03-08T23:21:36.566151+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: audit 2026-03-08T23:21:36.569801+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: audit 2026-03-08T23:21:36.570509+0000 mon.a (mon.0) 627 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: audit 2026-03-08T23:21:36.570946+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:37 vm02 bash[17457]: audit 2026-03-08T23:21:36.574089+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:38 vm10 bash[20034]: audit 2026-03-08T23:21:37.430309+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:21:38.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:38 vm10 bash[20034]: audit 2026-03-08T23:21:37.435335+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:38 vm10 bash[20034]: audit 2026-03-08T23:21:37.441197+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.658 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:38 vm10 bash[20034]: cluster 2026-03-08T23:21:37.578631+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-08T23:21:38.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:38 vm04 bash[19918]: audit 2026-03-08T23:21:37.430309+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:21:38.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:38 vm04 bash[19918]: audit 2026-03-08T23:21:37.435335+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:38 vm04 bash[19918]: audit 2026-03-08T23:21:37.441197+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:38 vm04 bash[19918]: cluster 2026-03-08T23:21:37.578631+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-08T23:21:38.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:38 vm02 bash[17457]: audit 2026-03-08T23:21:37.430309+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:21:38.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:38 vm02 bash[17457]: audit 2026-03-08T23:21:37.435335+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:38 vm02 bash[17457]: audit 2026-03-08T23:21:37.441197+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:38.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:38 vm02 bash[17457]: cluster 2026-03-08T23:21:37.578631+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-08T23:21:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:39 vm04 bash[19918]: cluster 2026-03-08T23:21:38.214032+0000 mgr.x (mgr.14150) 235 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:39 vm02 bash[17457]: cluster 2026-03-08T23:21:38.214032+0000 mgr.x (mgr.14150) 235 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:39 vm10 bash[20034]: cluster 2026-03-08T23:21:38.214032+0000 mgr.x (mgr.14150) 235 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:41 vm04 bash[19918]: cluster 2026-03-08T23:21:40.214295+0000 mgr.x (mgr.14150) 236 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:41 vm02 bash[17457]: cluster 2026-03-08T23:21:40.214295+0000 mgr.x (mgr.14150) 236 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:41 vm10 bash[20034]: cluster 2026-03-08T23:21:40.214295+0000 mgr.x (mgr.14150) 236 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:42.150 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:21:42.426 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:21:42.474 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":48,"num_osds":8,"num_up_osds":8,"osd_up_since":1773012096,"num_in_osds":8,"osd_in_since":1773012080,"num_remapped_pgs":0}
2026-03-08T23:21:42.474 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd dump --format=json
2026-03-08T23:21:42.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:42 vm04 bash[19918]: audit 2026-03-08T23:21:42.426463+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.102:0/3814169794' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-08T23:21:42.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:42 vm02 bash[17457]: audit 2026-03-08T23:21:42.426463+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.102:0/3814169794' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-08T23:21:42.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:42 vm10 bash[20034]: audit 2026-03-08T23:21:42.426463+0000 mon.a (mon.0) 634 : audit [DBG] from='client.? 192.168.123.102:0/3814169794' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: cluster 2026-03-08T23:21:42.214539+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: cephadm 2026-03-08T23:21:42.968162+0000 mgr.x (mgr.14150) 238 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.973863+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.978590+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.979309+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.979868+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.980271+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: cephadm 2026-03-08T23:21:42.980543+0000 mgr.x (mgr.14150) 239 : cephadm [INF] Adjusting osd_memory_target on vm10 to 1517M
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.983835+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.985805+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.986261+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:44.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:43 vm04 bash[19918]: audit 2026-03-08T23:21:42.990318+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: cluster 2026-03-08T23:21:42.214539+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: cephadm 2026-03-08T23:21:42.968162+0000 mgr.x (mgr.14150) 238 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.973863+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.978590+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.979309+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.979868+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.980271+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: cephadm 2026-03-08T23:21:42.980543+0000 mgr.x (mgr.14150) 239 : cephadm [INF] Adjusting osd_memory_target on vm10 to 1517M
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.983835+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.985805+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.986261+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:43 vm02 bash[17457]: audit 2026-03-08T23:21:42.990318+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: cluster 2026-03-08T23:21:42.214539+0000 mgr.x (mgr.14150) 237 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: cephadm 2026-03-08T23:21:42.968162+0000 mgr.x (mgr.14150) 238 : cephadm [INF] Detected new or changed devices on vm10
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.973863+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.978590+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.979309+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.979868+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.980271+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: cephadm 2026-03-08T23:21:42.980543+0000 mgr.x (mgr.14150) 239 : cephadm [INF] Adjusting osd_memory_target on vm10 to 1517M
2026-03-08T23:21:44.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.983835+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:44.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.985805+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:21:44.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.986261+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:21:44.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:43 vm10 bash[20034]: audit 2026-03-08T23:21:42.990318+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:21:46.163 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:21:46.236 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:45 vm02 bash[17457]: cluster 2026-03-08T23:21:44.214859+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:46.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:45 vm04 bash[19918]: cluster 2026-03-08T23:21:44.214859+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:46.394 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:21:46.394
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":48,"fsid":"91105a84-1b44-11f1-9a43-e95894f13987","created":"2026-03-08T23:15:53.503597+0000","modified":"2026-03-08T23:21:37.573266+0000","last_up_change":"2026-03-08T23:21:36.385217+0000","last_in_change":"2026-03-08T23:21:20.261534+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-08T23:18:54.227629+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4f9f4a95-f093-4c3b-af99-6c3664fdf90d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6803","nonce":706196410}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6805","nonce":706196410}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6809","nonce":706196410}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6807","nonce":706196410}]},"public_addr":"192.168.123.102:6803/706196410","cluster_addr":"192.168.123.102:6805/706196410","heartbeat_back_addr":"192.168.123.102:6809/706196410","heartbeat_front_addr":"192.168.123.102:6807/706196410","state":["exists","up"]},{"osd":1,"uuid":"329d7c16-85bb-4531-9c68-b1e468e49038","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6811","nonce":2405858986}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6813","nonce":2405858986}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6817","nonce":2405858986}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6815","nonce":2405858986}]},"public_addr":"192.168.123.102:6811/2405858986","cluster_addr":"192.168.123.102:6813/2405858986","heartbeat_back_addr":"192.168.123.102:6817/2405858986","heartbeat_front_addr":"192.168.123.102:6815/2405858986","state":["exists","up"]},{"osd":2,"uuid":"5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6801","nonce":1030884672}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6803","nonce":1030884672}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6807","nonce":1030884672}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6805","nonce":1030884672}]},"public_addr":"192.168.123.104:6801/1030884672","cluster_addr":"192.168.123.104:6803/1030884672","heartbeat_back_addr":"192.168.123.104:6807/1030884672","heartbeat_front_addr":"192.168.123.104:6805/1030884672","state":["exists","up"]},{"osd":3,"uuid":"754d7a6e-d6e9-4d53-b18d-fb8dd322dada","up":1,"in":1,"weight":1,"primary_affinity":1,"last_cl
ean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6809","nonce":953613314}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6811","nonce":953613314}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6815","nonce":953613314}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6813","nonce":953613314}]},"public_addr":"192.168.123.104:6809/953613314","cluster_addr":"192.168.123.104:6811/953613314","heartbeat_back_addr":"192.168.123.104:6815/953613314","heartbeat_front_addr":"192.168.123.104:6813/953613314","state":["exists","up"]},{"osd":4,"uuid":"bfc224db-b68a-4579-b006-40bea8da3848","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6817","nonce":3877212940}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6819","nonce":3877212940}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6823","nonce":3877212940}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6821","nonce":3877212940}]},"public_addr":"192.168.123.104:6817/3877212940","cluster_addr":"192.168.123.104:6819/3877212940","heartbeat_back_addr":"192.168.123.104:6823/3877212940","heartbeat_front_addr":"192.168.123.104:6821/3877212940","state":["exists","up"]},{"osd":5,"uuid":"b6909095-51a9-4b9d-95f5-1d9f04559ea1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6801","nonce":3075842155}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6803","nonce":3075842155}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6807","nonce":3075842155}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6805","nonce":3075842155}]},"public_addr":"192.168.123.110:6801/3075842155","cluster_addr":"192.168.123.110:6803/3075842155","heartbeat_back_addr":"192.168.123.110:6807/3075842155","heartbeat_front_addr":"192.168.123.110:6805/3075842155","state":["exists","up"]},{"osd":6,"uuid":"488a0919-fe60-4b1d-844d-b16c2182536e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":42,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6809","nonce":275518458}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6811","nonce":27
5518458}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6815","nonce":275518458}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6813","nonce":275518458}]},"public_addr":"192.168.123.110:6809/275518458","cluster_addr":"192.168.123.110:6811/275518458","heartbeat_back_addr":"192.168.123.110:6815/275518458","heartbeat_front_addr":"192.168.123.110:6813/275518458","state":["exists","up"]},{"osd":7,"uuid":"aef086d3-44c6-4078-a3ac-f3b6f3a98df9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6816","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6817","nonce":3535254497}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6818","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6819","nonce":3535254497}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6822","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6823","nonce":3535254497}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6820","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6821","nonce":3535254497}]},"public_addr":"192.168.123.110:6817/3535254497","cluster_addr":"192.168.123.110:6819/3535254497","heartbeat_back_addr":"192.168.123.110:6823/3535254497","heartbeat_front_addr":"192.168.123.110:6821/3535254497","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:17:45.685153+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:18:20.300742+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:18:52.316976+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:19:26.588768+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:19:59.033736+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:20:29.491958+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:21:01.901565+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:21:34.801133+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:6800/3940768142":"2026-03-09T23:16:14.166005+0000","192.168.123.102:0/3277743183":"2026-03-09T23:16:14.166005+0000","192.168.123.102:0/745748274":"2026-03-09T23:16:14.166005+0000","192.168.123.102:0/1072952787":"2026-03-09T23
:16:04.370430+0000","192.168.123.102:0/834014324":"2026-03-09T23:16:04.370430+0000","192.168.123.102:0/1753635103":"2026-03-09T23:16:04.370430+0000","192.168.123.102:6801/3940768142":"2026-03-09T23:16:14.166005+0000","192.168.123.102:6801/3600721925":"2026-03-09T23:16:04.370430+0000","192.168.123.102:0/3619041204":"2026-03-09T23:16:14.166005+0000","192.168.123.102:6800/3600721925":"2026-03-09T23:16:04.370430+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-08T23:21:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:45 vm10 bash[20034]: cluster 2026-03-08T23:21:44.214859+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:46.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:45 vm10 bash[20034]: cluster 2026-03-08T23:21:44.214859+0000 mgr.x (mgr.14150) 240 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:46.448 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-08T23:18:54.227629+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-08T23:21:46.448 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd pool get .mgr pg_num 2026-03-08T23:21:47.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:46 vm04 bash[19918]: audit 2026-03-08T23:21:46.393702+0000 mon.a (mon.0) 644 : audit [DBG] from='client.? 192.168.123.102:0/2017196509' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:21:47.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:46 vm04 bash[19918]: audit 2026-03-08T23:21:46.393702+0000 mon.a (mon.0) 644 : audit [DBG] from='client.? 192.168.123.102:0/2017196509' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:21:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:46 vm02 bash[17457]: audit 2026-03-08T23:21:46.393702+0000 mon.a (mon.0) 644 : audit [DBG] from='client.? 192.168.123.102:0/2017196509' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:21:47.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:46 vm02 bash[17457]: audit 2026-03-08T23:21:46.393702+0000 mon.a (mon.0) 644 : audit [DBG] from='client.? 192.168.123.102:0/2017196509' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:21:47.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:46 vm10 bash[20034]: audit 2026-03-08T23:21:46.393702+0000 mon.a (mon.0) 644 : audit [DBG] from='client.? 192.168.123.102:0/2017196509' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:21:47.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:46 vm10 bash[20034]: audit 2026-03-08T23:21:46.393702+0000 mon.a (mon.0) 644 : audit [DBG] from='client.? 
192.168.123.102:0/2017196509' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:21:48.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:48 vm04 bash[19918]: cluster 2026-03-08T23:21:46.215115+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:48.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:48 vm04 bash[19918]: cluster 2026-03-08T23:21:46.215115+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:48.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:47 vm02 bash[17457]: cluster 2026-03-08T23:21:46.215115+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:48.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:47 vm02 bash[17457]: cluster 2026-03-08T23:21:46.215115+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:48.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:48 vm10 bash[20034]: cluster 2026-03-08T23:21:46.215115+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:48.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:48 vm10 bash[20034]: cluster 2026-03-08T23:21:46.215115+0000 mgr.x (mgr.14150) 241 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.176 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:21:50.273 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:50 vm02 bash[17457]: cluster 2026-03-08T23:21:48.215448+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.273 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:50 vm02 bash[17457]: cluster 2026-03-08T23:21:48.215448+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:50 vm04 bash[19918]: cluster 2026-03-08T23:21:48.215448+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:50 vm04 bash[19918]: cluster 2026-03-08T23:21:48.215448+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:50 vm10 bash[20034]: cluster 2026-03-08T23:21:48.215448+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:50 vm10 bash[20034]: cluster 2026-03-08T23:21:48.215448+0000 mgr.x (mgr.14150) 242 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:21:50.435 INFO:teuthology.orchestra.run.vm02.stdout:pg_num: 1 2026-03-08T23:21:50.490 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm02 
2026-03-08T23:21:50.490 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.b on vm10
2026-03-08T23:21:50.490 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd pool create datapool 3 3 replicated
2026-03-08T23:21:51.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:51 vm04 bash[19918]: audit 2026-03-08T23:21:50.436158+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.102:0/857326671' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-08T23:21:51.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:51 vm02 bash[17457]: audit 2026-03-08T23:21:50.436158+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.102:0/857326671' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-08T23:21:51.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:51 vm10 bash[20034]: audit 2026-03-08T23:21:50.436158+0000 mon.a (mon.0) 645 : audit [DBG] from='client.? 192.168.123.102:0/857326671' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-08T23:21:52.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:52 vm04 bash[19918]: cluster 2026-03-08T23:21:50.215847+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:52.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:52 vm02 bash[17457]: cluster 2026-03-08T23:21:50.215847+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:52.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:52 vm10 bash[20034]: cluster 2026-03-08T23:21:50.215847+0000 mgr.x (mgr.14150) 243 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:54.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:54 vm04 bash[19918]: cluster 2026-03-08T23:21:52.216152+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:54 vm02 bash[17457]: cluster 2026-03-08T23:21:52.216152+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:54.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:54 vm10 bash[20034]: cluster 2026-03-08T23:21:52.216152+0000 mgr.x (mgr.14150) 244 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:55.109 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:21:55.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:55 vm04 bash[19918]: cluster 2026-03-08T23:21:54.216425+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:55.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:55 vm02 bash[17457]: cluster 2026-03-08T23:21:54.216425+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:55.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:55 vm10 bash[20034]: cluster 2026-03-08T23:21:54.216425+0000 mgr.x (mgr.14150) 245 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:56.395 INFO:teuthology.orchestra.run.vm10.stderr:pool 'datapool' created
2026-03-08T23:21:56.445 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- rbd pool init datapool
2026-03-08T23:21:56.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:56 vm04 bash[19918]: audit 2026-03-08T23:21:55.628458+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.110:0/192821928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-08T23:21:56.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:56 vm04 bash[19918]: audit 2026-03-08T23:21:55.628922+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-08T23:21:56.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:56 vm02 bash[17457]: audit 2026-03-08T23:21:55.628458+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.110:0/192821928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-08T23:21:56.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:56 vm02 bash[17457]: audit 2026-03-08T23:21:55.628922+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-08T23:21:56.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:56 vm10 bash[20034]: audit 2026-03-08T23:21:55.628458+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.110:0/192821928' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-08T23:21:56.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:56 vm10 bash[20034]: audit 2026-03-08T23:21:55.628922+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-08T23:21:57.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:57 vm02 bash[17457]: cluster 2026-03-08T23:21:56.216681+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:57.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:57 vm02 bash[17457]: audit 2026-03-08T23:21:56.384651+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
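[Note] Pool setup for the iSCSI workload is two cephadm shell invocations: 'ceph osd pool create datapool 3 3 replicated' followed by 'rbd pool init datapool'. The init step both writes the pool's rbd metadata and enables the rbd application on it, which is why an 'osd pool application enable' audit entry appears shortly below (mon.a op 651). A minimal sketch of the same sequence run directly against a cluster:

    ceph osd pool create datapool 3 3 replicated   # pg_num=3, pgp_num=3
    rbd pool init datapool                         # also tags the pool with the 'rbd' application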
2026-03-08T23:21:57.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:57 vm02 bash[17457]: cluster 2026-03-08T23:21:56.389695+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-08T23:21:57.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:57 vm10 bash[20034]: cluster 2026-03-08T23:21:56.216681+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:57.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:57 vm10 bash[20034]: audit 2026-03-08T23:21:56.384651+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
2026-03-08T23:21:57.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:57 vm10 bash[20034]: cluster 2026-03-08T23:21:56.389695+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-08T23:21:57.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:57 vm04 bash[19918]: cluster 2026-03-08T23:21:56.216681+0000 mgr.x (mgr.14150) 246 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:57.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:57 vm04 bash[19918]: audit 2026-03-08T23:21:56.384651+0000 mon.a (mon.0) 647 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
2026-03-08T23:21:57.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:57 vm04 bash[19918]: cluster 2026-03-08T23:21:56.389695+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-08T23:21:58.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:58 vm10 bash[20034]: cluster 2026-03-08T23:21:57.400555+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in
2026-03-08T23:21:58.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:58 vm04 bash[19918]: cluster 2026-03-08T23:21:57.400555+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in
2026-03-08T23:21:58.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:58 vm02 bash[17457]: cluster 2026-03-08T23:21:57.400555+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in
2026-03-08T23:21:59.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:59 vm10 bash[20034]: cluster 2026-03-08T23:21:58.216930+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v212: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:59.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:21:59 vm10 bash[20034]: cluster 2026-03-08T23:21:58.408745+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-08T23:21:59.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:59 vm04 bash[19918]: cluster 2026-03-08T23:21:58.216930+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v212: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:59.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:21:59 vm04 bash[19918]: cluster 2026-03-08T23:21:58.408745+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-08T23:21:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:59 vm02 bash[17457]: cluster 2026-03-08T23:21:58.216930+0000 mgr.x (mgr.14150) 247 : cluster [DBG] pgmap v212: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:21:59.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:21:59 vm02 bash[17457]: cluster 2026-03-08T23:21:58.408745+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-08T23:22:01.054 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:22:01.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:01 vm04 bash[19918]: cluster 2026-03-08T23:22:00.217213+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v214: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:01 vm04 bash[19918]: audit 2026-03-08T23:22:01.398293+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.110:0/3584906646' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-08T23:22:01.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:01 vm04 bash[19918]: audit 2026-03-08T23:22:01.398716+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-08T23:22:01.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:01 vm02 bash[17457]: cluster 2026-03-08T23:22:00.217213+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v214: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:01.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:01 vm02 bash[17457]: audit 2026-03-08T23:22:01.398293+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.110:0/3584906646' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-08T23:22:01.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:01 vm02 bash[17457]: audit 2026-03-08T23:22:01.398716+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-08T23:22:01.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:01 vm10 bash[20034]: cluster 2026-03-08T23:22:00.217213+0000 mgr.x (mgr.14150) 248 : cluster [DBG] pgmap v214: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:01.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:01 vm10 bash[20034]: audit 2026-03-08T23:22:01.398293+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.110:0/3584906646' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-08T23:22:01.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:01 vm10 bash[20034]: audit 2026-03-08T23:22:01.398716+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-08T23:22:02.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:02 vm04 bash[19918]: audit 2026-03-08T23:22:01.467606+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-08T23:22:02.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:02 vm04 bash[19918]: cluster 2026-03-08T23:22:01.470809+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-08T23:22:02.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:02 vm02 bash[17457]: audit 2026-03-08T23:22:01.467606+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-08T23:22:02.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:02 vm02 bash[17457]: cluster 2026-03-08T23:22:01.470809+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-08T23:22:02.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:02 vm10 bash[20034]: audit 2026-03-08T23:22:01.467606+0000 mon.a (mon.0) 652 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-08T23:22:02.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:02 vm10 bash[20034]: cluster 2026-03-08T23:22:01.470809+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-08T23:22:03.551 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.102,192.168.123.110 --placement '2;vm02=iscsi.a;vm10=iscsi.b'
2026-03-08T23:22:03.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:03 vm04 bash[19918]: cluster 2026-03-08T23:22:02.217469+0000 mgr.x (mgr.14150) 249 : cluster [DBG] pgmap v216: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:03.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:03 vm04 bash[19918]: cluster 2026-03-08T23:22:02.499706+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-08T23:22:03.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:03 vm02 bash[17457]: cluster 2026-03-08T23:22:02.217469+0000 mgr.x (mgr.14150) 249 : cluster [DBG] pgmap v216: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:03.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:03 vm02 bash[17457]: cluster 2026-03-08T23:22:02.499706+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-08T23:22:03.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:03 vm10 bash[20034]: cluster 2026-03-08T23:22:02.217469+0000 mgr.x (mgr.14150) 249 : cluster [DBG] pgmap v216: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:03.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:03 vm10 bash[20034]: cluster 2026-03-08T23:22:02.499706+0000 mon.a (mon.0) 654 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-08T23:22:04.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:04 vm04 bash[19918]: cluster 2026-03-08T23:22:03.495828+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-08T23:22:04.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:04 vm02 bash[17457]: cluster 2026-03-08T23:22:03.495828+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-08T23:22:04.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:04 vm10 bash[20034]: cluster 2026-03-08T23:22:03.495828+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-08T23:22:05.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:05 vm04 bash[19918]: cluster 2026-03-08T23:22:04.217724+0000 mgr.x (mgr.14150) 250 : cluster [DBG] pgmap v219: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:05.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:05 vm02 bash[17457]: cluster 2026-03-08T23:22:04.217724+0000 mgr.x (mgr.14150) 250 : cluster [DBG] pgmap v219: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:05.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:05 vm10 bash[20034]: cluster 2026-03-08T23:22:04.217724+0000 mgr.x (mgr.14150) 250 : cluster [DBG] pgmap v219: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:07.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:07 vm04 bash[19918]: cluster 2026-03-08T23:22:06.218042+0000 mgr.x (mgr.14150) 251 : cluster [DBG] pgmap v220: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s
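[Note] 'ceph orch apply iscsi datapool admin admin --trusted_ip_list ... --placement ...' asks the cephadm mgr module to schedule two ceph-iscsi gateway daemons with fixed names on fixed hosts. A rough sketch of the equivalent YAML service spec (field names follow the standard cephadm iscsi spec; no spec file appears in this log, so treat the file and its name as illustrative):

    service_type: iscsi
    service_id: datapool
    placement:
      count: 2
      hosts:
        - vm02=iscsi.a
        - vm10=iscsi.b
    spec:
      pool: datapool
      api_user: admin
      api_password: admin
      trusted_ip_list: "192.168.123.102,192.168.123.110"

Such a spec would be applied with 'ceph orch apply -i <spec-file>' instead of the positional form used here; both produce the same "Saving service iscsi.datapool spec" mgr log entry seen below.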
2026-03-08T23:22:07.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:07 vm02 bash[17457]: cluster 2026-03-08T23:22:06.218042+0000 mgr.x (mgr.14150) 251 : cluster [DBG] pgmap v220: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s
2026-03-08T23:22:07.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:07 vm10 bash[20034]: cluster 2026-03-08T23:22:06.218042+0000 mgr.x (mgr.14150) 251 : cluster [DBG] pgmap v220: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 341 B/s wr, 0 op/s
2026-03-08T23:22:08.164 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:22:08.443 INFO:teuthology.orchestra.run.vm10.stdout:Scheduled iscsi.datapool update...
2026-03-08T23:22:08.532 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg...
2026-03-08T23:22:08.532 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:22:08.532 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-08T23:22:08.539 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:22:08.539 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-08T23:22:08.547 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:22:08.547 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-08T23:22:08.555 DEBUG:teuthology.orchestra.run.vm02:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@iscsi.iscsi.a.service
2026-03-08T23:22:08.582 DEBUG:teuthology.orchestra.run.vm10:iscsi.iscsi.b> sudo journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@iscsi.iscsi.b.service
2026-03-08T23:22:08.598 INFO:tasks.cephadm:Setting up client nodes...
2026-03-08T23:22:08.599 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-08T23:22:09.605 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:22:09 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:09.606 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:22:09 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: cluster 2026-03-08T23:22:08.218328+0000 mgr.x (mgr.14150) 252 : cluster [DBG] pgmap v221: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 303 B/s wr, 0 op/s
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.431451+0000 mgr.x (mgr.14150) 253 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102,192.168.123.110", "placement": "2;vm02=iscsi.a;vm10=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: cephadm 2026-03-08T23:22:08.432542+0000 mgr.x (mgr.14150) 254 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;vm10=iscsi.b;count:2
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.442469+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.443400+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.783728+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.784201+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.788814+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.790469+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.792507+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: audit 2026-03-08T23:22:08.797237+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:09 vm02 bash[17457]: cephadm 2026-03-08T23:22:08.797822+0000 mgr.x (mgr.14150) 255 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02
2026-03-08T23:22:09.606 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:22:09 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: cluster 2026-03-08T23:22:08.218328+0000 mgr.x (mgr.14150) 252 : cluster [DBG] pgmap v221: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 303 B/s wr, 0 op/s
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.431451+0000 mgr.x (mgr.14150) 253 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102,192.168.123.110", "placement": "2;vm02=iscsi.a;vm10=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: cephadm 2026-03-08T23:22:08.432542+0000 mgr.x (mgr.14150) 254 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;vm10=iscsi.b;count:2
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.442469+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.443400+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.783728+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.784201+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.788814+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.790469+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.792507+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: audit 2026-03-08T23:22:08.797237+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:09.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:09 vm04 bash[19918]: cephadm 2026-03-08T23:22:08.797822+0000 mgr.x (mgr.14150) 255 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: cluster 2026-03-08T23:22:08.218328+0000 mgr.x (mgr.14150) 252 : cluster [DBG] pgmap v221: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 303 B/s wr, 0 op/s
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.431451+0000 mgr.x (mgr.14150) 253 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.102,192.168.123.110", "placement": "2;vm02=iscsi.a;vm10=iscsi.b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: cephadm 2026-03-08T23:22:08.432542+0000 mgr.x (mgr.14150) 254 : cephadm [INF] Saving service iscsi.datapool spec with placement vm02=iscsi.a;vm10=iscsi.b;count:2
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.442469+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.443400+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.783728+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.784201+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.788814+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.790469+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.792507+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: audit 2026-03-08T23:22:08.797237+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:09.905 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:09 vm10 bash[20034]: cephadm 2026-03-08T23:22:08.797822+0000 mgr.x (mgr.14150) 255 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm02
2026-03-08T23:22:10.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:6800/3940768142
2026-03-08T23:22:10.436 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:10.437 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:10.437 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:10.437 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:22:10.701 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:22:10.701 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:22:10.701 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:22:10.701 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: cluster 2026-03-08T23:22:09.522440+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-08T23:22:10.701 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: cluster 2026-03-08T23:22:09.522440+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-08T23:22:10.701 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.639455+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.639455+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.649836+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.649836+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.659374+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.659374+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.661002+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.661002+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": 
"client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.665524+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.665524+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.668054+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:09.668054+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: cephadm 2026-03-08T23:22:09.668587+0000 mgr.x (mgr.14150) 256 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm10 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: cephadm 2026-03-08T23:22:09.668587+0000 mgr.x (mgr.14150) 256 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm10 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:10.094792+0000 mon.a (mon.0) 671 : audit [DBG] from='client.? 192.168.123.102:0/1557763332' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:10.094792+0000 mon.a (mon.0) 671 : audit [DBG] from='client.? 192.168.123.102:0/1557763332' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:10.268738+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/2614490396' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:10.268738+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 
192.168.123.102:0/2614490396' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:10.269377+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:10 vm10 bash[20034]: audit 2026-03-08T23:22:10.269377+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch 2026-03-08T23:22:10.702 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:22:10 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: cluster 2026-03-08T23:22:09.522440+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: cluster 2026-03-08T23:22:09.522440+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.639455+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.639455+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.649836+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.649836+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.659374+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.659374+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.661002+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 
2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.665524+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:09.668054+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: cephadm 2026-03-08T23:22:09.668587+0000 mgr.x (mgr.14150) 256 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm10
2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:10.094792+0000 mon.a (mon.0) 671 : audit [DBG] from='client.? 192.168.123.102:0/1557763332' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:10.268738+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/2614490396' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch
2026-03-08T23:22:10.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:10 vm04 bash[19918]: audit 2026-03-08T23:22:10.269377+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch
2026-03-08T23:22:10.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:10.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:0/3277743183
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: cluster 2026-03-08T23:22:09.522440+0000 mon.a (mon.0) 664 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:09.639455+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:09.649836+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:09.659374+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:09.661002+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:09.665524+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.b", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:22:10.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:09.668054+0000 mon.a (mon.0) 670 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:10.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: cephadm 2026-03-08T23:22:09.668587+0000 mgr.x (mgr.14150) 256 : cephadm [INF] Deploying daemon iscsi.iscsi.b on vm10
2026-03-08T23:22:10.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:10.094792+0000 mon.a (mon.0) 671 : audit [DBG] from='client.? 192.168.123.102:0/1557763332' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-08T23:22:10.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:10.268738+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/2614490396' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch
2026-03-08T23:22:10.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:10 vm02 bash[17457]: audit 2026-03-08T23:22:10.269377+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]: dispatch
2026-03-08T23:22:11.032 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:10 vm10 bash[39354]: debug Processing osd blocklist entries for this node
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: debug Reading the configuration object to update local LIO configuration
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: debug Configuration does not have an entry for this host(vm10.local) - nothing to define to LIO
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: * Serving Flask app 'rbd-target-api' (lazy loading)
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: * Environment: production
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: Use a production WSGI server instead.
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: * Debug mode: off
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: debug * Running on all addresses.
2026-03-08T23:22:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:22:11.408 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: * Running on all addresses.
2026-03-08T23:22:11.408 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:22:11.408 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-08T23:22:11.408 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:11 vm10 bash[39354]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-08T23:22:11.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: cluster 2026-03-08T23:22:10.218603+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v223: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 265 B/s wr, 0 op/s
2026-03-08T23:22:11.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.549305+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.554206+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.558261+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: cephadm 2026-03-08T23:22:10.558988+0000 mgr.x (mgr.14150) 258 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.568417+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.584560+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.672325+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]': finished
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: cluster 2026-03-08T23:22:10.674301+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.871993+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/449633005' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:10.872460+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch
2026-03-08T23:22:11.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[17457]: audit 2026-03-08T23:22:11.023695+0000 mon.a (mon.0) 681 : audit [DBG] from='client.? 192.168.123.110:0/1765099393' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-08T23:22:11.828 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:11.828 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:11 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:0/745748274
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: cluster 2026-03-08T23:22:10.218603+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v223: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 265 B/s wr, 0 op/s
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.549305+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.554206+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.558261+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: cephadm 2026-03-08T23:22:10.558988+0000 mgr.x (mgr.14150) 258 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.568417+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.584560+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.672325+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]': finished
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: cluster 2026-03-08T23:22:10.674301+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.871993+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/449633005' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:10.872460+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch
2026-03-08T23:22:11.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:11 vm04 bash[19918]: audit 2026-03-08T23:22:11.023695+0000 mon.a (mon.0) 681 : audit [DBG] from='client.? 192.168.123.110:0/1765099393' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-08T23:22:11.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: cluster 2026-03-08T23:22:10.218603+0000 mgr.x (mgr.14150) 257 : cluster [DBG] pgmap v223: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 265 B/s wr, 0 op/s
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.549305+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.554206+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.558261+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: cephadm 2026-03-08T23:22:10.558988+0000 mgr.x (mgr.14150) 258 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.568417+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.584560+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.672325+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]': finished 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.672325+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3940768142"}]': finished 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: cluster 2026-03-08T23:22:10.674301+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: cluster 2026-03-08T23:22:10.674301+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.871993+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/449633005' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.871993+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/449633005' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.872460+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:10.872460+0000 mon.a (mon.0) 680 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]: dispatch 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:11.023695+0000 mon.a (mon.0) 681 : audit [DBG] from='client.? 192.168.123.110:0/1765099393' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:22:11.908 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:11 vm10 bash[20034]: audit 2026-03-08T23:22:11.023695+0000 mon.a (mon.0) 681 : audit [DBG] from='client.? 192.168.123.110:0/1765099393' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:22:13.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:12 vm04 bash[19918]: audit 2026-03-08T23:22:11.675771+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]': finished 2026-03-08T23:22:13.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:12 vm04 bash[19918]: audit 2026-03-08T23:22:11.675771+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 
' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]': finished 2026-03-08T23:22:13.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:12 vm04 bash[19918]: cluster 2026-03-08T23:22:11.678576+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:22:13.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:12 vm04 bash[19918]: cluster 2026-03-08T23:22:11.678576+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:22:13.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:12 vm04 bash[19918]: audit 2026-03-08T23:22:11.893202+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]: dispatch 2026-03-08T23:22:13.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:12 vm04 bash[19918]: audit 2026-03-08T23:22:11.893202+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]: dispatch 2026-03-08T23:22:13.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[17457]: audit 2026-03-08T23:22:11.675771+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]': finished 2026-03-08T23:22:13.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[17457]: audit 2026-03-08T23:22:11.675771+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]': finished 2026-03-08T23:22:13.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[17457]: cluster 2026-03-08T23:22:11.678576+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:22:13.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[17457]: cluster 2026-03-08T23:22:11.678576+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:22:13.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[17457]: audit 2026-03-08T23:22:11.893202+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]: dispatch 2026-03-08T23:22:13.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[17457]: audit 2026-03-08T23:22:11.893202+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]: dispatch 2026-03-08T23:22:13.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[37757]: debug Successfully removed blocklist entry 2026-03-08T23:22:13.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:12 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:0/1072952787 2026-03-08T23:22:13.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:12 vm10 bash[20034]: audit 2026-03-08T23:22:11.675771+0000 mon.a (mon.0) 682 : audit [INF] from='client.? 
' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]': finished 2026-03-08T23:22:13.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:12 vm10 bash[20034]: audit 2026-03-08T23:22:11.675771+0000 mon.a (mon.0) 682 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3277743183"}]': finished 2026-03-08T23:22:13.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:12 vm10 bash[20034]: cluster 2026-03-08T23:22:11.678576+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:22:13.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:12 vm10 bash[20034]: cluster 2026-03-08T23:22:11.678576+0000 mon.a (mon.0) 683 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:22:13.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:12 vm10 bash[20034]: audit 2026-03-08T23:22:11.893202+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]: dispatch 2026-03-08T23:22:13.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:12 vm10 bash[20034]: audit 2026-03-08T23:22:11.893202+0000 mon.a (mon.0) 684 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]: dispatch 2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: cluster 2026-03-08T23:22:12.218979+0000 mgr.x (mgr.14150) 259 : cluster [DBG] pgmap v226: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: cluster 2026-03-08T23:22:12.218979+0000 mgr.x (mgr.14150) 259 : cluster [DBG] pgmap v226: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: audit 2026-03-08T23:22:12.684235+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]': finished 2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: audit 2026-03-08T23:22:12.684235+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 
2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: cluster 2026-03-08T23:22:12.697202+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: cluster 2026-03-08T23:22:12.697649+0000 mon.a (mon.0) 687 : cluster [DBG] mgrmap e15: x(active, since 5m)
2026-03-08T23:22:14.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:13 vm04 bash[19918]: audit 2026-03-08T23:22:12.854856+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.102:0/43094933' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1072952787"}]: dispatch
2026-03-08T23:22:14.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[17457]: cluster 2026-03-08T23:22:12.218979+0000 mgr.x (mgr.14150) 259 : cluster [DBG] pgmap v226: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:14.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[17457]: audit 2026-03-08T23:22:12.684235+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]': finished
2026-03-08T23:22:14.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[17457]: cluster 2026-03-08T23:22:12.697202+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-08T23:22:14.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[17457]: cluster 2026-03-08T23:22:12.697649+0000 mon.a (mon.0) 687 : cluster [DBG] mgrmap e15: x(active, since 5m)
2026-03-08T23:22:14.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[17457]: audit 2026-03-08T23:22:12.854856+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.102:0/43094933' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1072952787"}]: dispatch
2026-03-08T23:22:14.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:14.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:13 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:0/834014324
2026-03-08T23:22:14.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:13 vm10 bash[20034]: cluster 2026-03-08T23:22:12.218979+0000 mgr.x (mgr.14150) 259 : cluster [DBG] pgmap v226: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:14.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:13 vm10 bash[20034]: audit 2026-03-08T23:22:12.684235+0000 mon.a (mon.0) 685 : audit [INF] from='client.? 192.168.123.102:0/2600983298' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/745748274"}]': finished
2026-03-08T23:22:14.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:13 vm10 bash[20034]: cluster 2026-03-08T23:22:12.697202+0000 mon.a (mon.0) 686 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-08T23:22:14.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:13 vm10 bash[20034]: cluster 2026-03-08T23:22:12.697649+0000 mon.a (mon.0) 687 : cluster [DBG] mgrmap e15: x(active, since 5m)
2026-03-08T23:22:14.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:13 vm10 bash[20034]: audit 2026-03-08T23:22:12.854856+0000 mon.a (mon.0) 688 : audit [INF] from='client.? 192.168.123.102:0/43094933' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1072952787"}]: dispatch
2026-03-08T23:22:14.274 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:22:14.610 INFO:teuthology.orchestra.run.vm02.stdout:[client.0]
2026-03-08T23:22:14.610 INFO:teuthology.orchestra.run.vm02.stdout: key = AQCmBK5px8sdJBAA3YnVFXMtl5cEHZzQrDk9lg==
2026-03-08T23:22:14.661 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:22:14.661 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-08T23:22:14.661 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-08T23:22:14.674 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-08T23:22:14.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[17457]: audit 2026-03-08T23:22:13.695917+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.102:0/43094933' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1072952787"}]': finished
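The orchestra.run commands above are the harness installing the client.0 keyring it just fetched (write via dd, then a world-readable chmod) and minting client.1 inside a cephadm shell. A hand-run equivalent for client.1, sketched with this run's container image and fsid copied from the log (the target keyring path is the analogous one for client.1, not taken verbatim from the run):

  # Mint (or fetch) the keyring inside a cephadm shell on the bootstrap host ...
  sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- \
      ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' |
    sudo dd of=/etc/ceph/ceph.client.1.keyring    # ... and drop it where librados clients look for it
  sudo chmod 0644 /etc/ceph/ceph.client.1.keyring # same world-readable mode the test uses for client.0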
2026-03-08T23:22:14.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[17457]: cluster 2026-03-08T23:22:13.700545+0000 mon.a (mon.0) 690 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-08T23:22:14.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[17457]: audit 2026-03-08T23:22:13.872145+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]: dispatch
2026-03-08T23:22:14.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[17457]: audit 2026-03-08T23:22:14.604638+0000 mon.b (mon.2) 17 : audit [INF] from='client.? 192.168.123.102:0/2590544390' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:14.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[17457]: audit 2026-03-08T23:22:14.605773+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:14.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[17457]: audit 2026-03-08T23:22:14.609040+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:14.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:14.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:14 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:0/1753635103
2026-03-08T23:22:15.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:14 vm04 bash[19918]: audit 2026-03-08T23:22:13.695917+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.102:0/43094933' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1072952787"}]': finished
2026-03-08T23:22:15.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:14 vm04 bash[19918]: cluster 2026-03-08T23:22:13.700545+0000 mon.a (mon.0) 690 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-08T23:22:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:14 vm04 bash[19918]: audit 2026-03-08T23:22:13.872145+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]: dispatch
2026-03-08T23:22:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:14 vm04 bash[19918]: audit 2026-03-08T23:22:14.604638+0000 mon.b (mon.2) 17 : audit [INF] from='client.? 192.168.123.102:0/2590544390' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:14 vm04 bash[19918]: audit 2026-03-08T23:22:14.605773+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:15.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:14 vm04 bash[19918]: audit 2026-03-08T23:22:14.609040+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:15.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:14 vm10 bash[20034]: audit 2026-03-08T23:22:13.695917+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.102:0/43094933' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1072952787"}]': finished
2026-03-08T23:22:15.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:14 vm10 bash[20034]: cluster 2026-03-08T23:22:13.700545+0000 mon.a (mon.0) 690 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-08T23:22:15.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:14 vm10 bash[20034]: audit 2026-03-08T23:22:13.872145+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]: dispatch
2026-03-08T23:22:15.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:14 vm10 bash[20034]: audit 2026-03-08T23:22:14.604638+0000 mon.b (mon.2) 17 : audit [INF] from='client.? 192.168.123.102:0/2590544390' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:15.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:14 vm10 bash[20034]: audit 2026-03-08T23:22:14.605773+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:15.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:14 vm10 bash[20034]: audit 2026-03-08T23:22:14.609040+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:16.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: cluster 2026-03-08T23:22:14.219286+0000 mgr.x (mgr.14150) 260 : cluster [DBG] pgmap v229: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:14.716057+0000 mon.a (mon.0) 694 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]': finished
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: cluster 2026-03-08T23:22:14.720563+0000 mon.a (mon.0) 695 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:14.886355+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:14.900284+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.513982+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.518330+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.633715+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.637495+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
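Each journalctl stream above is only a mon relaying the shared cluster log, which is why every record shows up once per monitor host. The same records can be read once, centrally, from any mon; a sketch, assuming a release where `ceph log last` accepts a count, a severity level and a channel (true of recent Ceph, but worth checking against `ceph log last --help` on older builds):

  # Last 20 INFO-and-above records from the audit channel (the numbered records above).
  sudo ceph log last 20 info audit
  # The cluster channel carries the osdmap/pgmap/mgrmap lines.
  sudo ceph log last 20 debug cluster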
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.638292+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.639212+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.643537+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.660734+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.662431+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.666585+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.673176+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.677382+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.678827+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.683890+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.685440+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.686641+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.687313+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
"client.admin"}]: dispatch 2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.691570+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:15 vm04 bash[19918]: audit 2026-03-08T23:22:15.691570+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: cluster 2026-03-08T23:22:14.219286+0000 mgr.x (mgr.14150) 260 : cluster [DBG] pgmap v229: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: cluster 2026-03-08T23:22:14.219286+0000 mgr.x (mgr.14150) 260 : cluster [DBG] pgmap v229: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:14.716057+0000 mon.a (mon.0) 694 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]': finished 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:14.716057+0000 mon.a (mon.0) 694 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]': finished 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: cluster 2026-03-08T23:22:14.720563+0000 mon.a (mon.0) 695 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: cluster 2026-03-08T23:22:14.720563+0000 mon.a (mon.0) 695 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:14.886355+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:14.886355+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 
192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:14.900284+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:14.900284+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.513982+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.513982+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.518330+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.518330+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.633715+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.633715+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.637495+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.637495+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.638292+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.638292+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.639212+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.639212+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.643537+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.643537+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.660734+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.660734+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.662431+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.662431+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.666585+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.666585+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.673176+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.673176+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-08T23:22:16.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.677382+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.677382+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.678827+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch 2026-03-08T23:22:16.145 
2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.683890+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.685440+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.686641+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.687313+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:16.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[17457]: audit 2026-03-08T23:22:15.691570+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:16.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:15 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:6801/3940768142
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: cluster 2026-03-08T23:22:14.219286+0000 mgr.x (mgr.14150) 260 : cluster [DBG] pgmap v229: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:14.716057+0000 mon.a (mon.0) 694 : audit [INF] from='client.? 192.168.123.102:0/3041114790' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/834014324"}]': finished
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: cluster 2026-03-08T23:22:14.720563+0000 mon.a (mon.0) 695 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:14.886355+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]: dispatch
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:14.900284+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.513982+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.518330+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.633715+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.637495+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.638292+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.639212+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.643537+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.660734+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.662431+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.666585+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.673176+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.677382+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.678827+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.683890+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.685440+0000 mon.a (mon.0) 712 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.686641+0000 mon.a (mon.0) 713 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.687313+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:22:16.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:15 vm10 bash[20034]: audit 2026-03-08T23:22:15.691570+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:22:17.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: audit 2026-03-08T23:22:15.661204+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: cephadm 2026-03-08T23:22:15.662103+0000 mgr.x (mgr.14150) 262 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: cephadm 2026-03-08T23:22:15.662188+0000 mgr.x (mgr.14150) 263 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.110:5000 to Dashboard
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: audit 2026-03-08T23:22:15.662860+0000 mgr.x (mgr.14150) 264 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: audit 2026-03-08T23:22:15.673716+0000 mgr.x (mgr.14150) 265 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: audit 2026-03-08T23:22:15.679232+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: audit 2026-03-08T23:22:15.750042+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]': finished
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: cluster 2026-03-08T23:22:15.758476+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-08T23:22:17.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:16 vm04 bash[19918]: audit 2026-03-08T23:22:15.918108+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]: dispatch
2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.661204+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: cephadm 2026-03-08T23:22:15.662103+0000 mgr.x (mgr.14150) 262 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard
2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: cephadm 2026-03-08T23:22:15.662188+0000 mgr.x (mgr.14150) 263 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.110:5000 to Dashboard
2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.662860+0000 mgr.x (mgr.14150) 264 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.673716+0000 mgr.x (mgr.14150) 265 : audit [DBG] from='mon.0 -' entity='mon.'
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.673716+0000 mgr.x (mgr.14150) 265 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.679232+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.679232+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.750042+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]': finished 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.750042+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]': finished 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: cluster 2026-03-08T23:22:15.758476+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: cluster 2026-03-08T23:22:15.758476+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.918108+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]: dispatch 2026-03-08T23:22:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[17457]: audit 2026-03-08T23:22:15.918108+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]: dispatch 2026-03-08T23:22:17.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[37757]: debug Successfully removed blocklist entry 2026-03-08T23:22:17.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:16 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:6801/3600721925 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.661204+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.661204+0000 mgr.x (mgr.14150) 261 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: cephadm 2026-03-08T23:22:15.662103+0000 mgr.x (mgr.14150) 262 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: cephadm 2026-03-08T23:22:15.662103+0000 mgr.x (mgr.14150) 262 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: cephadm 2026-03-08T23:22:15.662188+0000 mgr.x (mgr.14150) 263 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.110:5000 to Dashboard 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: cephadm 2026-03-08T23:22:15.662188+0000 mgr.x (mgr.14150) 263 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.110:5000 to Dashboard 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.662860+0000 mgr.x (mgr.14150) 264 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.662860+0000 mgr.x (mgr.14150) 264 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.673716+0000 mgr.x (mgr.14150) 265 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.673716+0000 mgr.x (mgr.14150) 265 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.679232+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.679232+0000 mgr.x (mgr.14150) 266 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm10"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.750042+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]': finished 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.750042+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 
192.168.123.102:0/495136420' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1753635103"}]': finished 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: cluster 2026-03-08T23:22:15.758476+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: cluster 2026-03-08T23:22:15.758476+0000 mon.a (mon.0) 717 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.918108+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]: dispatch 2026-03-08T23:22:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:16 vm10 bash[20034]: audit 2026-03-08T23:22:15.918108+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]: dispatch 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: cluster 2026-03-08T23:22:16.219585+0000 mgr.x (mgr.14150) 267 : cluster [DBG] pgmap v232: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: cluster 2026-03-08T23:22:16.219585+0000 mgr.x (mgr.14150) 267 : cluster [DBG] pgmap v232: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: audit 2026-03-08T23:22:16.771653+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]': finished 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: audit 2026-03-08T23:22:16.771653+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]': finished 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: cluster 2026-03-08T23:22:16.789437+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: cluster 2026-03-08T23:22:16.789437+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: audit 2026-03-08T23:22:16.943609+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]: dispatch 2026-03-08T23:22:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:17 vm04 bash[19918]: audit 2026-03-08T23:22:16.943609+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 
192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]: dispatch 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: cluster 2026-03-08T23:22:16.219585+0000 mgr.x (mgr.14150) 267 : cluster [DBG] pgmap v232: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: cluster 2026-03-08T23:22:16.219585+0000 mgr.x (mgr.14150) 267 : cluster [DBG] pgmap v232: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: audit 2026-03-08T23:22:16.771653+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]': finished 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: audit 2026-03-08T23:22:16.771653+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]': finished 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: cluster 2026-03-08T23:22:16.789437+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: cluster 2026-03-08T23:22:16.789437+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: audit 2026-03-08T23:22:16.943609+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]: dispatch 2026-03-08T23:22:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[17457]: audit 2026-03-08T23:22:16.943609+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 
192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]: dispatch 2026-03-08T23:22:18.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[37757]: debug Successfully removed blocklist entry 2026-03-08T23:22:18.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:17 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:0/3619041204 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: cluster 2026-03-08T23:22:16.219585+0000 mgr.x (mgr.14150) 267 : cluster [DBG] pgmap v232: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: cluster 2026-03-08T23:22:16.219585+0000 mgr.x (mgr.14150) 267 : cluster [DBG] pgmap v232: 4 pgs: 4 active+clean; 449 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: audit 2026-03-08T23:22:16.771653+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]': finished 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: audit 2026-03-08T23:22:16.771653+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.102:0/3173756225' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3940768142"}]': finished 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: cluster 2026-03-08T23:22:16.789437+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: cluster 2026-03-08T23:22:16.789437+0000 mon.a (mon.0) 720 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: audit 2026-03-08T23:22:16.943609+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]: dispatch 2026-03-08T23:22:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:17 vm10 bash[20034]: audit 2026-03-08T23:22:16.943609+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]: dispatch 2026-03-08T23:22:19.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:18 vm04 bash[19918]: audit 2026-03-08T23:22:17.783387+0000 mon.a (mon.0) 722 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]': finished 2026-03-08T23:22:19.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:18 vm04 bash[19918]: audit 2026-03-08T23:22:17.783387+0000 mon.a (mon.0) 722 : audit [INF] from='client.? 
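
The lock-step pattern above is the rbd-target-api daemon on vm02 clearing its own stale entries from the OSD blocklist as it starts up: each "Removing blocklisted entry for this host" line is followed by one osd blocklist rm round trip on the mons (dispatch, then finished, then a new osdmap epoch). A rough manual equivalent of one iteration, using an address taken from this run:

    # Hedged sketch; real addresses come from 'ceph osd blocklist ls', not from hard-coding.
    ceph osd blocklist ls                                 # list the current blocklist entries
    ceph osd blocklist rm 192.168.123.102:0/3619041204   # drop one stale entry for this gateway
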
2026-03-08T23:22:19.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:18 vm04 bash[19918]: cluster 2026-03-08T23:22:17.784830+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:22:19.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:18 vm04 bash[19918]: audit 2026-03-08T23:22:17.957816+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/941245694' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]: dispatch
2026-03-08T23:22:19.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:18 vm04 bash[19918]: audit 2026-03-08T23:22:17.958349+0000 mon.a (mon.0) 724 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]: dispatch
2026-03-08T23:22:19.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:18 vm02 bash[17457]: audit 2026-03-08T23:22:17.783387+0000 mon.a (mon.0) 722 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]': finished
2026-03-08T23:22:19.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:18 vm02 bash[17457]: cluster 2026-03-08T23:22:17.784830+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:22:19.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:18 vm02 bash[17457]: audit 2026-03-08T23:22:17.957816+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/941245694' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]: dispatch
2026-03-08T23:22:19.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:18 vm02 bash[17457]: audit 2026-03-08T23:22:17.958349+0000 mon.a (mon.0) 724 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]: dispatch
2026-03-08T23:22:19.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:18 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:19.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:18 vm02 bash[37757]: debug Removing blocklisted entry for this host : 192.168.123.102:6800/3600721925
2026-03-08T23:22:19.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:18 vm10 bash[20034]: audit 2026-03-08T23:22:17.783387+0000 mon.a (mon.0) 722 : audit [INF] from='client.? 192.168.123.102:0/2012616447' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3600721925"}]': finished
2026-03-08T23:22:19.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:18 vm10 bash[20034]: cluster 2026-03-08T23:22:17.784830+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:22:19.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:18 vm10 bash[20034]: audit 2026-03-08T23:22:17.957816+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.102:0/941245694' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]: dispatch
2026-03-08T23:22:19.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:18 vm10 bash[20034]: audit 2026-03-08T23:22:17.958349+0000 mon.a (mon.0) 724 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]: dispatch
2026-03-08T23:22:19.300 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.b/config
2026-03-08T23:22:19.592 INFO:teuthology.orchestra.run.vm04.stdout:[client.1]
2026-03-08T23:22:19.592 INFO:teuthology.orchestra.run.vm04.stdout: key = AQCrBK5pG7gbIxAAaDGH0t/dV0Q+yMmNaBAUjw==
2026-03-08T23:22:19.642 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:22:19.642 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-08T23:22:19.642 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-08T23:22:19.653 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph auth get-or-create client.2 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-08T23:22:19.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: cluster 2026-03-08T23:22:18.219818+0000 mgr.x (mgr.14150) 268 : cluster [DBG] pgmap v235: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s
2026-03-08T23:22:19.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: audit 2026-03-08T23:22:18.790748+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]': finished
2026-03-08T23:22:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: cluster 2026-03-08T23:22:18.795040+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:22:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: audit 2026-03-08T23:22:18.962923+0000 mon.a (mon.0) 727 : audit [INF] from='client.? 192.168.123.102:0/632811907' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3600721925"}]: dispatch
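
At this point the task has fetched a keyring for client.1 on vm04 and is starting the same sequence for client.2 on vm10: ceph auth get-or-create runs inside a cephadm shell pinned to this run's image and fsid, and the keyring text is written out with sudo dd and opened up with chmod 0644. Condensed into one pipeline, the pattern is roughly:

    # Sketch of the keyring-distribution pattern visible in the commands above.
    sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- \
        ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' |
        sudo dd of=/etc/ceph/ceph.client.1.keyring      # dd copies stdin into the keyring file
    sudo chmod 0644 /etc/ceph/ceph.client.1.keyring     # world-readable so the test user can read it
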
2026-03-08T23:22:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: audit 2026-03-08T23:22:19.588545+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.104:0/3792508978' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: audit 2026-03-08T23:22:19.588939+0000 mon.a (mon.0) 728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:19.875 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:19 vm04 bash[19918]: audit 2026-03-08T23:22:19.591297+0000 mon.a (mon.0) 729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: cluster 2026-03-08T23:22:18.219818+0000 mgr.x (mgr.14150) 268 : cluster [DBG] pgmap v235: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: audit 2026-03-08T23:22:18.790748+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]': finished
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: cluster 2026-03-08T23:22:18.795040+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: audit 2026-03-08T23:22:18.962923+0000 mon.a (mon.0) 727 : audit [INF] from='client.? 192.168.123.102:0/632811907' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3600721925"}]: dispatch
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: audit 2026-03-08T23:22:19.588545+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.104:0/3792508978' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: audit 2026-03-08T23:22:19.588939+0000 mon.a (mon.0) 728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:20.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[17457]: audit 2026-03-08T23:22:19.591297+0000 mon.a (mon.0) 729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: debug Successfully removed blocklist entry
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: debug Reading the configuration object to update local LIO configuration
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: debug Configuration does not have an entry for this host(vm02.local) - nothing to define to LIO
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: * Serving Flask app 'rbd-target-api' (lazy loading)
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: * Environment: production
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: Use a production WSGI server instead.
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: * Debug mode: off
2026-03-08T23:22:20.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: debug * Running on all addresses.
2026-03-08T23:22:20.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:22:20.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: * Running on all addresses.
2026-03-08T23:22:20.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:22:20.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-08T23:22:20.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-08T23:22:20.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:19 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:22:20.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: cluster 2026-03-08T23:22:18.219818+0000 mgr.x (mgr.14150) 268 : cluster [DBG] pgmap v235: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 4 op/s
2026-03-08T23:22:20.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: audit 2026-03-08T23:22:18.790748+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3619041204"}]': finished
2026-03-08T23:22:20.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: cluster 2026-03-08T23:22:18.795040+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:22:20.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: audit 2026-03-08T23:22:18.962923+0000 mon.a (mon.0) 727 : audit [INF] from='client.? 192.168.123.102:0/632811907' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3600721925"}]: dispatch
2026-03-08T23:22:20.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: audit 2026-03-08T23:22:19.588545+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.104:0/3792508978' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:20.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: audit 2026-03-08T23:22:19.588939+0000 mon.a (mon.0) 728 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:20.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:19 vm10 bash[20034]: audit 2026-03-08T23:22:19.591297+0000 mon.a (mon.0) 729 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:21.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:20 vm04 bash[19918]: audit 2026-03-08T23:22:19.801680+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.102:0/632811907' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3600721925"}]': finished
2026-03-08T23:22:21.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:20 vm04 bash[19918]: cluster 2026-03-08T23:22:19.803656+0000 mon.a (mon.0) 731 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:22:21.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:20 vm04 bash[19918]: audit 2026-03-08T23:22:19.936943+0000 mgr.x (mgr.14150) 269 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:21.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:20 vm02 bash[17457]: audit 2026-03-08T23:22:19.801680+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.102:0/632811907' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3600721925"}]': finished
2026-03-08T23:22:21.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:20 vm02 bash[17457]: cluster 2026-03-08T23:22:19.803656+0000 mon.a (mon.0) 731 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:22:21.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:20 vm02 bash[17457]: audit 2026-03-08T23:22:19.936943+0000 mgr.x (mgr.14150) 269 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:21.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:20 vm10 bash[20034]: audit 2026-03-08T23:22:19.801680+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.102:0/632811907' entity='client.iscsi.iscsi.a' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3600721925"}]': finished
2026-03-08T23:22:21.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:20 vm10 bash[20034]: cluster 2026-03-08T23:22:19.803656+0000 mon.a (mon.0) 731 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:22:21.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:20 vm10 bash[20034]: audit 2026-03-08T23:22:19.936943+0000 mgr.x (mgr.14150) 269 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:21.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:20 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:22:22.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:21 vm04 bash[19918]: cluster 2026-03-08T23:22:20.220014+0000 mgr.x (mgr.14150) 270 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:22.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:21 vm04 bash[19918]: audit 2026-03-08T23:22:20.891458+0000 mgr.x (mgr.14150) 271 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:22.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:21 vm02 bash[17457]: cluster 2026-03-08T23:22:20.220014+0000 mgr.x (mgr.14150) 270 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:22.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:21 vm02 bash[17457]: audit 2026-03-08T23:22:20.891458+0000 mgr.x (mgr.14150) 271 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:22.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:21 vm10 bash[20034]: cluster 2026-03-08T23:22:20.220014+0000 mgr.x (mgr.14150) 270 : cluster [DBG] pgmap v238: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:22.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:21 vm10 bash[20034]: audit 2026-03-08T23:22:20.891458+0000 mgr.x (mgr.14150) 271 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:24.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:23 vm04 bash[19918]: cluster 2026-03-08T23:22:22.220232+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s
2026-03-08T23:22:24.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:23 vm02 bash[17457]: cluster 2026-03-08T23:22:22.220232+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s
2026-03-08T23:22:24.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:23 vm10 bash[20034]: cluster 2026-03-08T23:22:22.220232+0000 mgr.x (mgr.14150) 272 : cluster [DBG] pgmap v239: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s
2026-03-08T23:22:24.266 INFO:teuthology.orchestra.run.vm10.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.c/config
2026-03-08T23:22:24.555 INFO:teuthology.orchestra.run.vm10.stdout:[client.2]
2026-03-08T23:22:24.555 INFO:teuthology.orchestra.run.vm10.stdout: key = AQCwBK5pQHbVIBAAKyTE9Up6FJ0mRN+QZm2Ffw==
2026-03-08T23:22:24.607 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:22:24.607 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/ceph/ceph.client.2.keyring
2026-03-08T23:22:24.607 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod 0644 /etc/ceph/ceph.client.2.keyring
2026-03-08T23:22:24.618 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-08T23:22:24.618 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-08T23:22:24.618 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph mgr dump --format=json
2026-03-08T23:22:24.829 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:24 vm10 bash[20034]: audit 2026-03-08T23:22:24.550394+0000 mon.c (mon.1) 19 : audit [INF] from='client.? 192.168.123.110:0/3337621208' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:24.829 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:24 vm10 bash[20034]: audit 2026-03-08T23:22:24.550762+0000 mon.a (mon.0) 732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
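
With both client keyrings installed, tasks.ceph now blocks until the cluster settles, and the first gate is mgr availability, probed with ceph mgr dump --format=json as shown above. Teuthology parses that JSON in Python; a minimal shell stand-in for the same poll (assuming jq is available, which teuthology itself does not use) could look like:

    # Hedged sketch of the 'waiting for mgr available' poll, not teuthology's actual code.
    until sudo /home/ubuntu/cephtest/cephadm shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- \
            ceph mgr dump --format=json | jq -e '.available' >/dev/null; do
        sleep 5   # retry until the mgr map reports an active, available mgr
    done
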
2026-03-08T23:22:24.829 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:24 vm10 bash[20034]: audit 2026-03-08T23:22:24.553522+0000 mon.a (mon.0) 733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:24.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:24 vm02 bash[17457]: audit 2026-03-08T23:22:24.550394+0000 mon.c (mon.1) 19 : audit [INF] from='client.? 192.168.123.110:0/3337621208' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:24.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:24 vm02 bash[17457]: audit 2026-03-08T23:22:24.550762+0000 mon.a (mon.0) 732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:24.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:24 vm02 bash[17457]: audit 2026-03-08T23:22:24.553522+0000 mon.a (mon.0) 733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:25.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:24 vm04 bash[19918]: audit 2026-03-08T23:22:24.550394+0000 mon.c (mon.1) 19 : audit [INF] from='client.? 192.168.123.110:0/3337621208' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:25.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:24 vm04 bash[19918]: audit 2026-03-08T23:22:24.550762+0000 mon.a (mon.0) 732 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-08T23:22:25.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:24 vm04 bash[19918]: audit 2026-03-08T23:22:24.553522+0000 mon.a (mon.0) 733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.2", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-08T23:22:26.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:25 vm04 bash[19918]: cluster 2026-03-08T23:22:24.220466+0000 mgr.x (mgr.14150) 273 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s
2026-03-08T23:22:26.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:25 vm02 bash[17457]: cluster 2026-03-08T23:22:24.220466+0000 mgr.x (mgr.14150) 273 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s
2026-03-08T23:22:26.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:25 vm10 bash[20034]: cluster 2026-03-08T23:22:24.220466+0000 mgr.x (mgr.14150) 273 : cluster [DBG] pgmap v240: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s
2026-03-08T23:22:28.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:27 vm04 bash[19918]: cluster 2026-03-08T23:22:26.220737+0000 mgr.x (mgr.14150) 274 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:28.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:27 vm02 bash[17457]: cluster 2026-03-08T23:22:26.220737+0000 mgr.x (mgr.14150) 274 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:28.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:27 vm10 bash[20034]: cluster 2026-03-08T23:22:26.220737+0000 mgr.x (mgr.14150) 274 : cluster [DBG] pgmap v241: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
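[Annotation, not part of the captured log output.] The "waiting for mgr available" step above polls the `ceph mgr dump --format=json` command issued through the cephadm shell; the dump that follows reports "available":true with active mgr "x" (gid 14150), which is what lets the task proceed. A rough sketch of such a poll loop, reusing the image and fsid from the log; the helper itself is hypothetical, not teuthology's code, but the JSON field names match the dump below:

    import json
    import subprocess
    import time

    # cephadm shell invocation copied from the log line above
    CEPHADM_SHELL = [
        "sudo", "/home/ubuntu/cephtest/cephadm",
        "--image", "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
        "shell", "--fsid", "91105a84-1b44-11f1-9a43-e95894f13987", "--",
    ]

    def wait_for_mgr_available(timeout=300, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                CEPHADM_SHELL + ["ceph", "mgr", "dump", "--format=json"],
                check=True, capture_output=True, text=True).stdout
            dump = json.loads(out)
            if dump.get("available"):       # see "available":true in the dump below
                return dump["active_name"]  # "x" in this run
            time.sleep(interval)
        raise TimeoutError(f"no active mgr within {timeout}s")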
2026-03-08T23:22:29.237 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:22:29.501 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:22:29.555 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":15,"flags":0,"active_gid":14150,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":2912384465},{"type":"v1","addr":"192.168.123.102:6801","nonce":2912384465}]},"active_addr":"192.168.123.102:6801/2912384465","active_change":"2026-03-08T23:16:14.166265+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate 
as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.102:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"cephadm","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1922785099}]},{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":1602158626}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr"
:"192.168.123.102:0","nonce":2369663363}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":3531267005}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":2627123225}]}]} 2026-03-08T23:22:29.556 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-08T23:22:29.556 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-08T23:22:29.556 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd dump --format=json 2026-03-08T23:22:30.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:29 vm04 bash[19918]: cluster 2026-03-08T23:22:28.221006+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s 2026-03-08T23:22:30.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:29 vm04 bash[19918]: cluster 2026-03-08T23:22:28.221006+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s 2026-03-08T23:22:30.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:29 vm04 bash[19918]: audit 2026-03-08T23:22:29.499712+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.102:0/1542554695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:22:30.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:29 vm04 bash[19918]: audit 2026-03-08T23:22:29.499712+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.102:0/1542554695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:22:30.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:29 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:22:30.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:29 vm02 bash[17457]: cluster 2026-03-08T23:22:28.221006+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s 2026-03-08T23:22:30.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:29 vm02 bash[17457]: cluster 2026-03-08T23:22:28.221006+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s 2026-03-08T23:22:30.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:29 vm02 bash[17457]: audit 2026-03-08T23:22:29.499712+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.102:0/1542554695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:22:30.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:29 vm02 bash[17457]: audit 2026-03-08T23:22:29.499712+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.102:0/1542554695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:22:30.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:29 vm10 bash[20034]: cluster 2026-03-08T23:22:28.221006+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s 2026-03-08T23:22:30.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:29 vm10 bash[20034]: cluster 2026-03-08T23:22:28.221006+0000 mgr.x (mgr.14150) 275 : cluster [DBG] pgmap v242: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 2 op/s 2026-03-08T23:22:30.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:29 vm10 bash[20034]: audit 2026-03-08T23:22:29.499712+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.102:0/1542554695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:22:30.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:29 vm10 bash[20034]: audit 2026-03-08T23:22:29.499712+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.102:0/1542554695' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:22:31.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:30 vm04 bash[19918]: audit 2026-03-08T23:22:29.943146+0000 mgr.x (mgr.14150) 276 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:31.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:30 vm04 bash[19918]: audit 2026-03-08T23:22:29.943146+0000 mgr.x (mgr.14150) 276 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:31.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:30 vm02 bash[17457]: audit 2026-03-08T23:22:29.943146+0000 mgr.x (mgr.14150) 276 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:31.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:30 vm02 bash[17457]: audit 2026-03-08T23:22:29.943146+0000 mgr.x (mgr.14150) 276 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:31.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:30 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:22:31.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:30 vm10 bash[20034]: audit 2026-03-08T23:22:29.943146+0000 mgr.x (mgr.14150) 276 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:31.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:30 vm10 bash[20034]: audit 2026-03-08T23:22:29.943146+0000 mgr.x (mgr.14150) 276 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:31 vm04 bash[19918]: cluster 2026-03-08T23:22:30.221266+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 1 op/s 2026-03-08T23:22:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:31 vm04 bash[19918]: cluster 2026-03-08T23:22:30.221266+0000 mgr.x 
(mgr.14150) 277 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 1 op/s 2026-03-08T23:22:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:31 vm04 bash[19918]: audit 2026-03-08T23:22:30.898842+0000 mgr.x (mgr.14150) 278 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:31 vm04 bash[19918]: audit 2026-03-08T23:22:30.898842+0000 mgr.x (mgr.14150) 278 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:31 vm02 bash[17457]: cluster 2026-03-08T23:22:30.221266+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 1 op/s 2026-03-08T23:22:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:31 vm02 bash[17457]: cluster 2026-03-08T23:22:30.221266+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 1 op/s 2026-03-08T23:22:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:31 vm02 bash[17457]: audit 2026-03-08T23:22:30.898842+0000 mgr.x (mgr.14150) 278 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:31 vm02 bash[17457]: audit 2026-03-08T23:22:30.898842+0000 mgr.x (mgr.14150) 278 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:31 vm10 bash[20034]: cluster 2026-03-08T23:22:30.221266+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 1 op/s 2026-03-08T23:22:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:31 vm10 bash[20034]: cluster 2026-03-08T23:22:30.221266+0000 mgr.x (mgr.14150) 277 : cluster [DBG] pgmap v243: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 1 op/s 2026-03-08T23:22:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:31 vm10 bash[20034]: audit 2026-03-08T23:22:30.898842+0000 mgr.x (mgr.14150) 278 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:31 vm10 bash[20034]: audit 2026-03-08T23:22:30.898842+0000 mgr.x (mgr.14150) 278 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:33.254 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:33.484 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-08T23:22:33.484 
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":65,"fsid":"91105a84-1b44-11f1-9a43-e95894f13987","created":"2026-03-08T23:15:53.503597+0000","modified":"2026-03-08T23:22:19.791655+0000","last_up_change":"2026-03-08T23:21:36.385217+0000","last_in_change":"2026-03-08T23:21:20.261534+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-08T23:18:54.227629+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-08T23:21:55.631475+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":54,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":5.3299999237060547,"score_stable":5.3299999237060547,"optimal_score":0.75,"raw_score_acting":4,"raw_score_stable":4,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4f9f4a95-f093-4c3b-af99-6c3664fdf90d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6803","nonce":706196410}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6805","nonce":706196410}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6809","nonce":706196410}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6807","nonce":706196410}]},"public_addr":"192.168.123.102:6803/706196410","cluster_addr":"192.168.123.102:6805/706196410","heartbeat_back_addr":"192.168.123.102:6809/706196410","heartbeat_front_addr":"192.168.123.102:6807/706196410","state":["exists","up"]},{"osd":1,"uuid":"329d7c16-85bb-4531-9c68-b1e468e49038","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6811","nonce":2405858986}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6813","nonce":2405858986}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6817","nonce":2405858986}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6815","nonce":2405858986}]},"public_addr":"192.168.123.102:6811/2405858986","cluster_addr":"192.168.123.102:6813/2405858986","heartbeat_back_addr":"192.168.123.102:6817/2405858986","heartbeat_front_addr":"192.168.123.102:6815/2405858986","state":["exists","up"]},{"osd":2,"uuid":"5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6801","nonce":1030884672}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6803","nonce":1030884672}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6807","nonce":1030884672}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6805","nonce":1030884672}]},"public_addr":"192.168.123.104:6801/1030884672","cluster_addr":"192.168.123.104:6803/1030884672","heartbeat_back_addr":"192.168.123.104:6807/1030884672","heartbeat_front_addr":"192.168.123.104:6805/1030884672","state":["exists","up"]},{"osd":3,"uuid":"754d7a6e-d6e9-4d53-b18d-fb8dd322dada","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"l
ast_clean_end":0,"up_from":25,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6809","nonce":953613314}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6811","nonce":953613314}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6815","nonce":953613314}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6813","nonce":953613314}]},"public_addr":"192.168.123.104:6809/953613314","cluster_addr":"192.168.123.104:6811/953613314","heartbeat_back_addr":"192.168.123.104:6815/953613314","heartbeat_front_addr":"192.168.123.104:6813/953613314","state":["exists","up"]},{"osd":4,"uuid":"bfc224db-b68a-4579-b006-40bea8da3848","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6817","nonce":3877212940}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6819","nonce":3877212940}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6823","nonce":3877212940}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6821","nonce":3877212940}]},"public_addr":"192.168.123.104:6817/3877212940","cluster_addr":"192.168.123.104:6819/3877212940","heartbeat_back_addr":"192.168.123.104:6823/3877212940","heartbeat_front_addr":"192.168.123.104:6821/3877212940","state":["exists","up"]},{"osd":5,"uuid":"b6909095-51a9-4b9d-95f5-1d9f04559ea1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6801","nonce":3075842155}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6803","nonce":3075842155}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6807","nonce":3075842155}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6805","nonce":3075842155}]},"public_addr":"192.168.123.110:6801/3075842155","cluster_addr":"192.168.123.110:6803/3075842155","heartbeat_back_addr":"192.168.123.110:6807/3075842155","heartbeat_front_addr":"192.168.123.110:6805/3075842155","state":["exists","up"]},{"osd":6,"uuid":"488a0919-fe60-4b1d-844d-b16c2182536e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":42,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6809","nonce":275518458}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6811","nonce":275518458}]},"hea
rtbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6815","nonce":275518458}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6813","nonce":275518458}]},"public_addr":"192.168.123.110:6809/275518458","cluster_addr":"192.168.123.110:6811/275518458","heartbeat_back_addr":"192.168.123.110:6815/275518458","heartbeat_front_addr":"192.168.123.110:6813/275518458","state":["exists","up"]},{"osd":7,"uuid":"aef086d3-44c6-4078-a3ac-f3b6f3a98df9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6816","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6817","nonce":3535254497}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6818","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6819","nonce":3535254497}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6822","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6823","nonce":3535254497}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6820","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6821","nonce":3535254497}]},"public_addr":"192.168.123.110:6817/3535254497","cluster_addr":"192.168.123.110:6819/3535254497","heartbeat_back_addr":"192.168.123.110:6823/3535254497","heartbeat_front_addr":"192.168.123.110:6821/3535254497","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:17:45.685153+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:18:20.300742+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:18:52.316976+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:19:26.588768+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:19:59.033736+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:20:29.491958+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:21:01.901565+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:21:34.801133+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_clas
s_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-08T23:22:33.536 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-08T23:22:33.536 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd dump --format=json 2026-03-08T23:22:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:33 vm04 bash[19918]: cluster 2026-03-08T23:22:32.221512+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:33 vm04 bash[19918]: cluster 2026-03-08T23:22:32.221512+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:33 vm04 bash[19918]: audit 2026-03-08T23:22:33.484664+0000 mon.a (mon.0) 734 : audit [DBG] from='client.? 192.168.123.102:0/561046767' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:33 vm04 bash[19918]: audit 2026-03-08T23:22:33.484664+0000 mon.a (mon.0) 734 : audit [DBG] from='client.? 192.168.123.102:0/561046767' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:33 vm02 bash[17457]: cluster 2026-03-08T23:22:32.221512+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:33 vm02 bash[17457]: cluster 2026-03-08T23:22:32.221512+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:33 vm02 bash[17457]: audit 2026-03-08T23:22:33.484664+0000 mon.a (mon.0) 734 : audit [DBG] from='client.? 192.168.123.102:0/561046767' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:33 vm02 bash[17457]: audit 2026-03-08T23:22:33.484664+0000 mon.a (mon.0) 734 : audit [DBG] from='client.? 192.168.123.102:0/561046767' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:33 vm10 bash[20034]: cluster 2026-03-08T23:22:32.221512+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:33 vm10 bash[20034]: cluster 2026-03-08T23:22:32.221512+0000 mgr.x (mgr.14150) 279 : cluster [DBG] pgmap v244: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:33 vm10 bash[20034]: audit 2026-03-08T23:22:33.484664+0000 mon.a (mon.0) 734 : audit [DBG] from='client.? 
192.168.123.102:0/561046767' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:33 vm10 bash[20034]: audit 2026-03-08T23:22:33.484664+0000 mon.a (mon.0) 734 : audit [DBG] from='client.? 192.168.123.102:0/561046767' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:35 vm04 bash[19918]: cluster 2026-03-08T23:22:34.221774+0000 mgr.x (mgr.14150) 280 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:35 vm04 bash[19918]: cluster 2026-03-08T23:22:34.221774+0000 mgr.x (mgr.14150) 280 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:36.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:35 vm02 bash[17457]: cluster 2026-03-08T23:22:34.221774+0000 mgr.x (mgr.14150) 280 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:36.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:35 vm02 bash[17457]: cluster 2026-03-08T23:22:34.221774+0000 mgr.x (mgr.14150) 280 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:36.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:35 vm10 bash[20034]: cluster 2026-03-08T23:22:34.221774+0000 mgr.x (mgr.14150) 280 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:36.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:35 vm10 bash[20034]: cluster 2026-03-08T23:22:34.221774+0000 mgr.x (mgr.14150) 280 : cluster [DBG] pgmap v245: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:37.269 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:37.519 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-08T23:22:37.519 
INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":65,"fsid":"91105a84-1b44-11f1-9a43-e95894f13987","created":"2026-03-08T23:15:53.503597+0000","modified":"2026-03-08T23:22:19.791655+0000","last_up_change":"2026-03-08T23:21:36.385217+0000","last_in_change":"2026-03-08T23:21:20.261534+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-08T23:18:54.227629+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-08T23:21:55.631475+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":54,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":5.3299999237060547,"score_stable":5.3299999237060547,"optimal_score":0.75,"raw_score_acting":4,"raw_score_stable":4,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"4f9f4a95-f093-4c3b-af99-6c3664fdf90d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6803","nonce":706196410}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6805","nonce":706196410}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6809","nonce":706196410}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":706196410},{"type":"v1","addr":"192.168.123.102:6807","nonce":706196410}]},"public_addr":"192.168.123.102:6803/706196410","cluster_addr":"192.168.123.102:6805/706196410","heartbeat_back_addr":"192.168.123.102:6809/706196410","heartbeat_front_addr":"192.168.123.102:6807/706196410","state":["exists","up"]},{"osd":1,"uuid":"329d7c16-85bb-4531-9c68-b1e468e49038","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6811","nonce":2405858986}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6813","nonce":2405858986}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6817","nonce":2405858986}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":2405858986},{"type":"v1","addr":"192.168.123.102:6815","nonce":2405858986}]},"public_addr":"192.168.123.102:6811/2405858986","cluster_addr":"192.168.123.102:6813/2405858986","heartbeat_back_addr":"192.168.123.102:6817/2405858986","heartbeat_front_addr":"192.168.123.102:6815/2405858986","state":["exists","up"]},{"osd":2,"uuid":"5ce9efa5-ea2f-41d6-a3a7-fd4b1153686b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6801","nonce":1030884672}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6803","nonce":1030884672}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6807","nonce":1030884672}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":1030884672},{"type":"v1","addr":"192.168.123.104:6805","nonce":1030884672}]},"public_addr":"192.168.123.104:6801/1030884672","cluster_addr":"192.168.123.104:6803/1030884672","heartbeat_back_addr":"192.168.123.104:6807/1030884672","heartbeat_front_addr":"192.168.123.104:6805/1030884672","state":["exists","up"]},{"osd":3,"uuid":"754d7a6e-d6e9-4d53-b18d-fb8dd322dada","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"l
ast_clean_end":0,"up_from":25,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6809","nonce":953613314}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6811","nonce":953613314}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6815","nonce":953613314}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":953613314},{"type":"v1","addr":"192.168.123.104:6813","nonce":953613314}]},"public_addr":"192.168.123.104:6809/953613314","cluster_addr":"192.168.123.104:6811/953613314","heartbeat_back_addr":"192.168.123.104:6815/953613314","heartbeat_front_addr":"192.168.123.104:6813/953613314","state":["exists","up"]},{"osd":4,"uuid":"bfc224db-b68a-4579-b006-40bea8da3848","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6817","nonce":3877212940}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6818","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6819","nonce":3877212940}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6822","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6823","nonce":3877212940}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6820","nonce":3877212940},{"type":"v1","addr":"192.168.123.104:6821","nonce":3877212940}]},"public_addr":"192.168.123.104:6817/3877212940","cluster_addr":"192.168.123.104:6819/3877212940","heartbeat_back_addr":"192.168.123.104:6823/3877212940","heartbeat_front_addr":"192.168.123.104:6821/3877212940","state":["exists","up"]},{"osd":5,"uuid":"b6909095-51a9-4b9d-95f5-1d9f04559ea1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6800","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6801","nonce":3075842155}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6802","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6803","nonce":3075842155}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6806","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6807","nonce":3075842155}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6804","nonce":3075842155},{"type":"v1","addr":"192.168.123.110:6805","nonce":3075842155}]},"public_addr":"192.168.123.110:6801/3075842155","cluster_addr":"192.168.123.110:6803/3075842155","heartbeat_back_addr":"192.168.123.110:6807/3075842155","heartbeat_front_addr":"192.168.123.110:6805/3075842155","state":["exists","up"]},{"osd":6,"uuid":"488a0919-fe60-4b1d-844d-b16c2182536e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":42,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6808","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6809","nonce":275518458}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6810","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6811","nonce":275518458}]},"hea
rtbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6814","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6815","nonce":275518458}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6812","nonce":275518458},{"type":"v1","addr":"192.168.123.110:6813","nonce":275518458}]},"public_addr":"192.168.123.110:6809/275518458","cluster_addr":"192.168.123.110:6811/275518458","heartbeat_back_addr":"192.168.123.110:6815/275518458","heartbeat_front_addr":"192.168.123.110:6813/275518458","state":["exists","up"]},{"osd":7,"uuid":"aef086d3-44c6-4078-a3ac-f3b6f3a98df9","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6816","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6817","nonce":3535254497}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6818","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6819","nonce":3535254497}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6822","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6823","nonce":3535254497}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.110:6820","nonce":3535254497},{"type":"v1","addr":"192.168.123.110:6821","nonce":3535254497}]},"public_addr":"192.168.123.110:6817/3535254497","cluster_addr":"192.168.123.110:6819/3535254497","heartbeat_back_addr":"192.168.123.110:6823/3535254497","heartbeat_front_addr":"192.168.123.110:6821/3535254497","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:17:45.685153+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:18:20.300742+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:18:52.316976+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:19:26.588768+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:19:59.033736+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:20:29.491958+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:21:01.901565+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:21:34.801133+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_clas
s_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-08T23:22:37.572 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.0 flush_pg_stats 2026-03-08T23:22:37.572 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.1 flush_pg_stats 2026-03-08T23:22:37.573 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.2 flush_pg_stats 2026-03-08T23:22:37.573 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.3 flush_pg_stats 2026-03-08T23:22:37.573 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.4 flush_pg_stats 2026-03-08T23:22:37.573 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.5 flush_pg_stats 2026-03-08T23:22:37.573 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.6 flush_pg_stats 2026-03-08T23:22:37.573 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph tell osd.7 flush_pg_stats 2026-03-08T23:22:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:37 vm04 bash[19918]: cluster 2026-03-08T23:22:36.222053+0000 mgr.x (mgr.14150) 281 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:37 vm04 bash[19918]: cluster 2026-03-08T23:22:36.222053+0000 mgr.x (mgr.14150) 281 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:37 vm04 bash[19918]: audit 2026-03-08T23:22:37.519151+0000 mon.a (mon.0) 735 : audit [DBG] from='client.? 192.168.123.102:0/3299174947' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:37 vm04 bash[19918]: audit 2026-03-08T23:22:37.519151+0000 mon.a (mon.0) 735 : audit [DBG] from='client.? 
192.168.123.102:0/3299174947' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:38.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:37 vm02 bash[17457]: cluster 2026-03-08T23:22:36.222053+0000 mgr.x (mgr.14150) 281 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:38.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:37 vm02 bash[17457]: cluster 2026-03-08T23:22:36.222053+0000 mgr.x (mgr.14150) 281 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:38.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:37 vm02 bash[17457]: audit 2026-03-08T23:22:37.519151+0000 mon.a (mon.0) 735 : audit [DBG] from='client.? 192.168.123.102:0/3299174947' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:38.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:37 vm02 bash[17457]: audit 2026-03-08T23:22:37.519151+0000 mon.a (mon.0) 735 : audit [DBG] from='client.? 192.168.123.102:0/3299174947' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:37 vm10 bash[20034]: cluster 2026-03-08T23:22:36.222053+0000 mgr.x (mgr.14150) 281 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:37 vm10 bash[20034]: cluster 2026-03-08T23:22:36.222053+0000 mgr.x (mgr.14150) 281 : cluster [DBG] pgmap v246: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:37 vm10 bash[20034]: audit 2026-03-08T23:22:37.519151+0000 mon.a (mon.0) 735 : audit [DBG] from='client.? 192.168.123.102:0/3299174947' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:37 vm10 bash[20034]: audit 2026-03-08T23:22:37.519151+0000 mon.a (mon.0) 735 : audit [DBG] from='client.? 
192.168.123.102:0/3299174947' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:22:40.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:39 vm02 bash[17457]: cluster 2026-03-08T23:22:38.222323+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:40.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:39 vm02 bash[17457]: cluster 2026-03-08T23:22:38.222323+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:40.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:39 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:22:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:39 vm10 bash[20034]: cluster 2026-03-08T23:22:38.222323+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:39 vm10 bash[20034]: cluster 2026-03-08T23:22:38.222323+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:40.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:39 vm04 bash[19918]: cluster 2026-03-08T23:22:38.222323+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:40.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:39 vm04 bash[19918]: cluster 2026-03-08T23:22:38.222323+0000 mgr.x (mgr.14150) 282 : cluster [DBG] pgmap v247: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:41.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:40 vm02 bash[17457]: audit 2026-03-08T23:22:39.953658+0000 mgr.x (mgr.14150) 283 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:41.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:40 vm02 bash[17457]: audit 2026-03-08T23:22:39.953658+0000 mgr.x (mgr.14150) 283 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:41.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:40 vm10 bash[20034]: audit 2026-03-08T23:22:39.953658+0000 mgr.x (mgr.14150) 283 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:41.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:40 vm10 bash[20034]: audit 2026-03-08T23:22:39.953658+0000 mgr.x (mgr.14150) 283 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:41.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:40 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:22:41.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:40 vm04 bash[19918]: audit 2026-03-08T23:22:39.953658+0000 mgr.x (mgr.14150) 283 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
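The two large JSON blobs above are the raw output of "ceph osd dump --format=json" that the ceph_manager task inspects before logging "all up!". A minimal sketch of that check in Python — assuming a hypothetical ceph(*args) helper that runs a ceph CLI command and returns its stdout; this is not teuthology's actual API, just the pattern the log shows:

    import json

    def all_osds_up(ceph):
        # Parse the same JSON shown in the dumps above and confirm every
        # OSD reports up == 1 and in == 1, which is what precedes the
        # "tasks.cephadm.ceph_manager.ceph:all up!" line.
        dump = json.loads(ceph('osd', 'dump', '--format=json'))
        return all(o['up'] == 1 and o['in'] == 1 for o in dump['osds'])

In the dumps above all eight OSDs carry "up":1,"in":1 and "state":["exists","up"], so this check passes and the run proceeds to the flush_pg_stats step.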
2026-03-08T23:22:41.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:40 vm04 bash[19918]: audit 2026-03-08T23:22:39.953658+0000 mgr.x (mgr.14150) 283 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:41 vm02 bash[17457]: cluster 2026-03-08T23:22:40.222597+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:41 vm02 bash[17457]: cluster 2026-03-08T23:22:40.222597+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:41 vm02 bash[17457]: audit 2026-03-08T23:22:40.909427+0000 mgr.x (mgr.14150) 285 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:41 vm02 bash[17457]: audit 2026-03-08T23:22:40.909427+0000 mgr.x (mgr.14150) 285 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:41 vm10 bash[20034]: cluster 2026-03-08T23:22:40.222597+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:41 vm10 bash[20034]: cluster 2026-03-08T23:22:40.222597+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:41 vm10 bash[20034]: audit 2026-03-08T23:22:40.909427+0000 mgr.x (mgr.14150) 285 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:41 vm10 bash[20034]: audit 2026-03-08T23:22:40.909427+0000 mgr.x (mgr.14150) 285 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:41 vm04 bash[19918]: cluster 2026-03-08T23:22:40.222597+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:42.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:41 vm04 bash[19918]: cluster 2026-03-08T23:22:40.222597+0000 mgr.x (mgr.14150) 284 : cluster [DBG] pgmap v248: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:42.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:41 vm04 bash[19918]: audit 2026-03-08T23:22:40.909427+0000 mgr.x (mgr.14150) 285 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:41 vm04 bash[19918]: audit 2026-03-08T23:22:40.909427+0000 
mgr.x (mgr.14150) 285 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:22:42.506 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.508 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.510 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.512 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.513 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.515 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.516 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:42.518 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:43.057 INFO:teuthology.orchestra.run.vm02.stdout:133143986211 2026-03-08T23:22:43.057 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.4 2026-03-08T23:22:43.287 INFO:teuthology.orchestra.run.vm02.stdout:154618822684 2026-03-08T23:22:43.287 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.5 2026-03-08T23:22:43.314 INFO:teuthology.orchestra.run.vm02.stdout:180388626453 2026-03-08T23:22:43.314 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.6 2026-03-08T23:22:43.325 INFO:teuthology.orchestra.run.vm02.stdout:201863462927 2026-03-08T23:22:43.325 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.7 2026-03-08T23:22:43.392 INFO:teuthology.orchestra.run.vm02.stdout:34359738429 2026-03-08T23:22:43.392 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.0 2026-03-08T23:22:43.397 INFO:teuthology.orchestra.run.vm02.stdout:55834574901 2026-03-08T23:22:43.397 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.1 2026-03-08T23:22:43.415 INFO:teuthology.orchestra.run.vm02.stdout:107374182440 2026-03-08T23:22:43.415 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.3 2026-03-08T23:22:43.423 INFO:teuthology.orchestra.run.vm02.stdout:77309411375 2026-03-08T23:22:43.424 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph osd last-stat-seq osd.2 2026-03-08T23:22:44.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:43 vm10 bash[20034]: cluster 2026-03-08T23:22:42.222838+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:44.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:43 vm10 bash[20034]: cluster 2026-03-08T23:22:42.222838+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:44.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:43 vm04 bash[19918]: cluster 2026-03-08T23:22:42.222838+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:44.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:43 vm04 bash[19918]: cluster 2026-03-08T23:22:42.222838+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:43 vm02 bash[17457]: cluster 2026-03-08T23:22:42.222838+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:44.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:43 vm02 bash[17457]: cluster 2026-03-08T23:22:42.222838+0000 mgr.x (mgr.14150) 286 : cluster [DBG] pgmap v249: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:46.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:45 vm10 bash[20034]: cluster 2026-03-08T23:22:44.223086+0000 mgr.x (mgr.14150) 287 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-08T23:22:46.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:45 vm10 bash[20034]: cluster 2026-03-08T23:22:44.223086+0000 mgr.x (mgr.14150) 287 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-08T23:22:46.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:45 vm04 bash[19918]: cluster 2026-03-08T23:22:44.223086+0000 mgr.x (mgr.14150) 287 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-08T23:22:46.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:45 vm04 bash[19918]: cluster 2026-03-08T23:22:44.223086+0000 mgr.x (mgr.14150) 287 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-08T23:22:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:45 vm02 bash[17457]: cluster 2026-03-08T23:22:44.223086+0000 mgr.x (mgr.14150) 287 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-08T23:22:46.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:45 vm02 bash[17457]: cluster 2026-03-08T23:22:44.223086+0000 mgr.x (mgr.14150) 287 : cluster [DBG] pgmap v250: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 2 op/s 2026-03-08T23:22:47.755 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.757 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.757 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.760 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.761 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.763 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.764 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:47.766 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:22:48.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:47 vm02 bash[17457]: cluster 2026-03-08T23:22:46.223691+0000 mgr.x (mgr.14150) 288 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:48.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:47 vm02 bash[17457]: cluster 2026-03-08T23:22:46.223691+0000 mgr.x (mgr.14150) 288 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:48.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:47 vm10 bash[20034]: cluster 2026-03-08T23:22:46.223691+0000 mgr.x (mgr.14150) 288 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:48.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:47 vm10 bash[20034]: cluster 2026-03-08T23:22:46.223691+0000 mgr.x (mgr.14150) 288 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:48.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:47 vm04 bash[19918]: cluster 2026-03-08T23:22:46.223691+0000 mgr.x (mgr.14150) 288 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:48.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:47 vm04 bash[19918]: cluster 2026-03-08T23:22:46.223691+0000 mgr.x (mgr.14150) 288 : cluster [DBG] pgmap v251: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:22:48.459 INFO:teuthology.orchestra.run.vm02.stdout:180388626454 2026-03-08T23:22:48.529 INFO:teuthology.orchestra.run.vm02.stdout:55834574902 2026-03-08T23:22:48.593 INFO:tasks.cephadm.ceph_manager.ceph:need seq 180388626453 got 180388626454 for osd.6 2026-03-08T23:22:48.593 DEBUG:teuthology.parallel:result is None 
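The "ceph tell osd.N flush_pg_stats" commands above and the "ceph osd last-stat-seq osd.N" queries form a handshake: each flush returns a sequence number, and the caller polls the monitor until it reports a stat seq at least that new — the "need seq X got Y for osd.N" lines on either side of this point, where got >= need counts as success (hence "need seq 180388626453 got 180388626454" passes). A minimal sketch of that pattern, again assuming the same hypothetical ceph() helper rather than teuthology's real implementation:

    import time

    def flush_pg_stats(ceph, osd_ids, timeout=90):
        # Each flush_pg_stats tell prints the sequence number assigned to
        # the flush; remember it per OSD.
        need = {i: int(ceph('tell', 'osd.%d' % i, 'flush_pg_stats'))
                for i in osd_ids}
        deadline = time.time() + timeout
        for i, seq in need.items():
            # Poll until the monitor has absorbed stats at least that new,
            # mirroring the "need seq X got Y for osd.N" checks in this log.
            while int(ceph('osd', 'last-stat-seq', 'osd.%d' % i)) < seq:
                if time.time() > deadline:
                    raise RuntimeError('osd.%d stats never reached seq %d'
                                       % (i, seq))
                time.sleep(1)

Once every OSD's seq has caught up (the remaining per-OSD checks below), the manager logs "waiting for clean" and polls "ceph pg dump --format=json" until all PGs report active+clean, matching the pgmap lines above ("4 pgs: 4 active+clean").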
2026-03-08T23:22:48.680 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574901 got 55834574902 for osd.1
2026-03-08T23:22:48.680 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:48.734 INFO:teuthology.orchestra.run.vm02.stdout:34359738429
2026-03-08T23:22:48.739 INFO:teuthology.orchestra.run.vm02.stdout:154618822684
2026-03-08T23:22:48.795 INFO:teuthology.orchestra.run.vm02.stdout:201863462927
2026-03-08T23:22:48.825 INFO:teuthology.orchestra.run.vm02.stdout:77309411376
2026-03-08T23:22:48.832 INFO:teuthology.orchestra.run.vm02.stdout:107374182441
2026-03-08T23:22:48.848 INFO:teuthology.orchestra.run.vm02.stdout:133143986211
2026-03-08T23:22:48.899 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738429 got 34359738429 for osd.0
2026-03-08T23:22:48.899 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:48.945 INFO:tasks.cephadm.ceph_manager.ceph:need seq 201863462927 got 201863462927 for osd.7
2026-03-08T23:22:48.945 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:48.957 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822684 got 154618822684 for osd.5
2026-03-08T23:22:48.957 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:48.997 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182440 got 107374182441 for osd.3
2026-03-08T23:22:48.997 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:49.018 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411375 got 77309411376 for osd.2
2026-03-08T23:22:49.018 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.454967+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.102:0/3496732136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.527096+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2203741617' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.726930+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.102:0/1774201909' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.739241+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/3088955115' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.790637+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.102:0/3311382377' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.819863+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/1171831841' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.832168+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 192.168.123.102:0/1140312265' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-08T23:22:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:48 vm02 bash[17457]: audit 2026-03-08T23:22:48.846383+0000 mon.a (mon.0) 738 : audit [DBG] from='client.? 192.168.123.102:0/1821380888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-08T23:22:49.025 INFO:tasks.cephadm.ceph_manager.ceph:need seq 133143986211 got 133143986211 for osd.4
2026-03-08T23:22:49.025 DEBUG:teuthology.parallel:result is None
2026-03-08T23:22:49.025 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean
2026-03-08T23:22:49.026 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph pg dump --format=json
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.454967+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.102:0/3496732136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.527096+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2203741617' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.726930+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.102:0/1774201909' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.739241+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/3088955115' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.790637+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.102:0/3311382377' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.819863+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/1171831841' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.832168+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 192.168.123.102:0/1140312265' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-08T23:22:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:48 vm04 bash[19918]: audit 2026-03-08T23:22:48.846383+0000 mon.a (mon.0) 738 : audit [DBG] from='client.? 192.168.123.102:0/1821380888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-08T23:22:49.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.454967+0000 mon.a (mon.0) 736 : audit [DBG] from='client.? 192.168.123.102:0/3496732136' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-08T23:22:49.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.527096+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2203741617' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-08T23:22:49.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.726930+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.102:0/1774201909' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-08T23:22:49.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.739241+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/3088955115' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-08T23:22:49.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.790637+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.102:0/3311382377' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-08T23:22:49.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.819863+0000 mon.c (mon.1) 23 : audit [DBG] from='client.? 192.168.123.102:0/1171831841' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-08T23:22:49.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.832168+0000 mon.b (mon.2) 21 : audit [DBG] from='client.? 192.168.123.102:0/1140312265' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-08T23:22:49.408 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:48 vm10 bash[20034]: audit 2026-03-08T23:22:48.846383+0000 mon.a (mon.0) 738 : audit [DBG] from='client.? 192.168.123.102:0/1821380888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-08T23:22:50.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:49 vm02 bash[17457]: cluster 2026-03-08T23:22:48.224001+0000 mgr.x (mgr.14150) 289 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:22:50.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:49 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:22:50.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:49 vm04 bash[19918]: cluster 2026-03-08T23:22:48.224001+0000 mgr.x (mgr.14150) 289 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:22:50.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:49 vm10 bash[20034]: cluster 2026-03-08T23:22:48.224001+0000 mgr.x (mgr.14150) 289 : cluster [DBG] pgmap v252: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:22:51.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:50 vm04 bash[19918]: audit 2026-03-08T23:22:49.964241+0000 mgr.x (mgr.14150) 290 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:51.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:50 vm02 bash[17457]: audit 2026-03-08T23:22:49.964241+0000 mgr.x (mgr.14150) 290 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:51.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:50 vm10 bash[20034]: audit 2026-03-08T23:22:49.964241+0000 mgr.x (mgr.14150) 290 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:51.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:22:50 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:22:52.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:51 vm04 bash[19918]: cluster 2026-03-08T23:22:50.224287+0000 mgr.x (mgr.14150) 291 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:52.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:51 vm04 bash[19918]: audit 2026-03-08T23:22:50.910293+0000 mgr.x (mgr.14150) 292 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:52.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:51 vm02 bash[17457]: cluster 2026-03-08T23:22:50.224287+0000 mgr.x (mgr.14150) 291 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:52.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:51 vm02 bash[17457]: audit 2026-03-08T23:22:50.910293+0000 mgr.x (mgr.14150) 292 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:52.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:51 vm10 bash[20034]: cluster 2026-03-08T23:22:50.224287+0000 mgr.x (mgr.14150) 291 : cluster [DBG] pgmap v253: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:22:52.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:51 vm10 bash[20034]: audit 2026-03-08T23:22:50.910293+0000 mgr.x (mgr.14150) 292 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:22:53.687 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:22:53.944 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:22:53.944 INFO:teuthology.orchestra.run.vm02.stderr:dumped all
2026-03-08T23:22:53.997 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":254,"stamp":"2026-03-08T23:22:52.224450+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459688,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":131,"num_read_kb":116,"num_write":63,"num_write_kb":587,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":41,"ondisk_log_size":41,"up":12,"acting":12,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":12,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":6,"kb":167739392,"kb_used":221352,"kb_used_data":6596,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518040,"statfs":{"total":171765137408,"available":171538472960,"internally_reserved":0,"allocated":6754304,"data_stored":3663392,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":20,"num_read_kb":20,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved
":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001988"},"pg_stats":[{"pgid":"2.2","version":"54'2","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:20.337394+0000","last_change":"2026-03-08T23:22:03.499754+0000","last_active":"2026-03-08T23:22:20.337394+0000","last_peered":"2026-03-08T23:22:20.337394+0000","last_clean":"2026-03-08T23:22:20.337394+0000","last_became_active":"2026-03-08T23:21:57.473946+0000","last_became_peered":"2026-03-08T23:21:57.473946+0000","last_unstale":"2026-03-08T23:22:20.337394+0000","last_undegraded":"2026-03-08T23:22:20.337394+0000","last_fullsized":"2026-03-08T23:22:20.337394+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_clean_scrub_stamp":"2026-03-08T23:21:56.367628+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T05:08:56.554124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00048285500000000002,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,2],"acting":[3,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1","version":"52'1","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:19.803693+0000","last_change":"2026-03-08T23:22:03.502276+0000","last_active":"2026-03-08T23:22:19.803693+0000","last_peered":"2026-03-08T23:22:19.803693+0000","last_clean":"2026-03-08T23:22:19.803693+0000","last_became_active":"2026-03-08T23:21:57.472941+0000","last_became_peered":"2026-03-08T23:21:57.472941+0000","last_unstale":"2026-03-08T23:22:19.803693+0000","last_undegraded":"2026-03-08T23:22:19.803693+0000","last_fullsized":"2026-03-08T23:22:19.803693+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_deep_scrub":"0'0","last_deep_scrub_sta
mp":"2026-03-08T23:21:56.367628+0000","last_clean_scrub_stamp":"2026-03-08T23:21:56.367628+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:32:25.801006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019244999999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"2.0","version":"56'6","reported_seq":137,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:48.922100+0000","last_change":"2026-03-08T23:22:03.499593+0000","last_active":"2026-03-08T23:22:48.922100+0000","last_peered":"2026-03-08T23:22:48.922100+0000","last_clean":"2026-03-08T23:22:48.922100+0000","last_became_active":"2026-03-08T23:21:57.472799+0000","last_became_peered":"2026-03-08T23:21:57.472799+0000","last_unstale":"2026-03-08T23:22:48.922100+0000","last_undegraded":"2026-03-08T23:22:48.922100+0000","last_fullsized":"2026-03-08T23:22:48.922100+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_clean_scrub_stamp":"2026-03-08T23:21:56.367628+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:40:25.974173+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00028157699999999998,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":85,"num_read_kb":79,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"20'32","reported_seq":104,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:20.337345+0000","last_change":"2026-03-08T23:20:33.833950+0000","last_active":"2026-03-08T23:22:20.337345+0000","last_peered":"2026-03-08T23:22:20.337345+0000","last_clean":"2026-03-08T23:22:20.337345+0000","last_became_active":"2026-03-08T23:20:33.525396+0000","last_became_peered":"2026-03-08T23:20:33.525396+0000","last_unstale":"2026-03-08T23:22:20.337345+0000","last_undegraded":"2026-03-08T23:22:20.337345+0000","last_fullsized":"2026-03-08T23:22:20.337345+0000","mapping_epoch":37,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":38,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:18:54.707925+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:18:54.707925+0000","last_clean_scrub_stamp":"2026-03-08T23:18:54.707925+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T11:11:56.402730+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,2],"acting":[3,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":85,"num_read_kb":79,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":9,"ondisk_log_size":9,"up":9,"acting":9,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1851392,"data_stored":1837120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"u
p":3,"acting":3,"num_store_stats":5}],"osd_stats":[{"osd":7,"up_from":47,"seq":201863462928,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27496,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939928,"statfs":{"total":21470642176,"available":21442486272,"internally_reserved":0,"allocated":667648,"data_stored":285541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":42,"seq":180388626455,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":671744,"data_stored":285930,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":36,"seq":154618822685,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27956,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939468,"statfs":{"total":21470642176,"available":21442015232,"internally_reserved":0,"allocated":1134592,"data_stored":745210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":31,"seq":133143986213,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27496,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939928,"statfs":{"total":21470642176,"available":21442486272,"internally_reserved":0,"allocated":667648,"data_stored":285541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182442,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27960,"kb_used_data":1112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939464,"statfs":{"total":21470642176,"available":21442011136,"internally_reserved":0,"allocated":1138688,"data_stored":745229,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired
":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411377,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27948,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939476,"statfs":{"total":21470642176,"available":21442023424,"internally_reserved":0,"allocated":1134592,"data_stored":744840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574903,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":671744,"data_stored":285560,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738430,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":667648,"data_stored":285541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"ava
ilable":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-08T23:22:53.997 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph pg dump --format=json 2026-03-08T23:22:54.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:53 vm04 bash[19918]: cluster 2026-03-08T23:22:52.224597+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:54.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:53 vm04 bash[19918]: cluster 2026-03-08T23:22:52.224597+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:53 vm02 bash[17457]: cluster 2026-03-08T23:22:52.224597+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:54.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:53 vm02 bash[17457]: cluster 2026-03-08T23:22:52.224597+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:54.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:53 vm10 bash[20034]: cluster 2026-03-08T23:22:52.224597+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:54.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:53 vm10 bash[20034]: cluster 2026-03-08T23:22:52.224597+0000 mgr.x (mgr.14150) 293 : cluster [DBG] pgmap v254: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:22:55.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:54 vm04 bash[19918]: audit 2026-03-08T23:22:53.944523+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.24503 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: 
2026-03-08T23:22:55.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:54 vm02 bash[17457]: audit 2026-03-08T23:22:53.944523+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.24503 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-08T23:22:55.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:54 vm10 bash[20034]: audit 2026-03-08T23:22:53.944523+0000 mgr.x (mgr.14150) 294 : audit [DBG] from='client.24503 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-08T23:22:56.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:55 vm04 bash[19918]: cluster 2026-03-08T23:22:54.225048+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v255: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:22:56.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:55 vm02 bash[17457]: cluster 2026-03-08T23:22:54.225048+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v255: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:22:56.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:55 vm10 bash[20034]: cluster 2026-03-08T23:22:54.225048+0000 mgr.x (mgr.14150) 295 : cluster [DBG] pgmap v255: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:22:57.703 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:22:57.960 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:22:57.960 INFO:teuthology.orchestra.run.vm02.stderr:dumped all
2026-03-08T23:22:58.007
INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":256,"stamp":"2026-03-08T23:22:56.225391+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459688,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":141,"num_read_kb":126,"num_write":63,"num_write_kb":587,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":41,"ondisk_log_size":41,"up":12,"acting":12,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":12,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":6,"kb":167739392,"kb_used":221352,"kb_used_data":6596,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518040,"statfs":{"total":171765137408,"available":171538472960,"internally_reserved":0,"allocated":6754304,"data_stored":3663392,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":22,"num_read_kb":22,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002425"},"pg_stats":[{"pgid":"2.2","version":"54'2","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:20.337394+0000","last_change":"2026-03-08T23:22:03.499754+0000","last_ac
tive":"2026-03-08T23:22:20.337394+0000","last_peered":"2026-03-08T23:22:20.337394+0000","last_clean":"2026-03-08T23:22:20.337394+0000","last_became_active":"2026-03-08T23:21:57.473946+0000","last_became_peered":"2026-03-08T23:21:57.473946+0000","last_unstale":"2026-03-08T23:22:20.337394+0000","last_undegraded":"2026-03-08T23:22:20.337394+0000","last_fullsized":"2026-03-08T23:22:20.337394+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_clean_scrub_stamp":"2026-03-08T23:21:56.367628+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T05:08:56.554124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00048285500000000002,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,2],"acting":[3,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1","version":"52'1","reported_seq":48,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:19.803693+0000","last_change":"2026-03-08T23:22:03.502276+0000","last_active":"2026-03-08T23:22:19.803693+0000","last_peered":"2026-03-08T23:22:19.803693+0000","last_clean":"2026-03-08T23:22:19.803693+0000","last_became_active":"2026-03-08T23:21:57.472941+0000","last_became_peered":"2026-03-08T23:21:57.472941+0000","last_unstale":"2026-03-08T23:22:19.803693+0000","last_undegraded":"2026-03-08T23:22:19.803693+0000","last_fullsized":"2026-03-08T23:22:19.803693+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_clean_scrub_stamp":"2026-03-08T23:21:56.367628+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T06:32:25.801006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019244999999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"2.0","version":"56'6","reported_seq":147,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:53.929756+0000","last_change":"2026-03-08T23:22:03.499593+0000","last_active":"2026-03-08T23:22:53.929756+0000","last_peered":"2026-03-08T23:22:53.929756+0000","last_clean":"2026-03-08T23:22:53.929756+0000","last_became_active":"2026-03-08T23:21:57.472799+0000","last_became_peered":"2026-03-08T23:21:57.472799+0000","last_unstale":"2026-03-08T23:22:53.929756+0000","last_undegraded":"2026-03-08T23:22:53.929756+0000","last_fullsized":"2026-03-08T23:22:53.929756+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:21:56.367628+0000","last_clean_scrub_stamp":"2026-03-08T23:21:56.367628+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:40:25.974173+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00028157699999999998,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":95,"num_read_kb":89,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"20'32","reported_seq":104,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-08T23:22:20.337345+0000","last_change":"2026-03-08T23:20:33.833950+0000","last_active":"2026-03-08T23:22:20.337345+0000","last_peered":"2026-03-08T23:22:20.337345+0000","last_clean":"2026-03-08T23:22:20.337345+0000","last_became_active":"2026-03-08T23:20:33.525396+0000","last_became_peered":"2026-03-08T23:20:33.525396+0000","last_unstale":"2026-03-08T23:22:20.337345+0000","last_undegraded":"2026-03-08T23:22:20.337345+0000","last_fullsized":"2026-03-08T23:22:20.337345+0000","mapping_epoch":37,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":38,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:18:54.707925+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:18:54.707925+0000","last_clean_scrub_stamp":"2026-03-08T23:18:54.707925+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T11:11:56.402730+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,2],"acting":[3,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":95,"num_read_kb":89,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":9,"ondisk_log_size":9,"up":9,"acting":9,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":4,"num_bytes_recovered":918560,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1851392,"data_stored":1837120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"u
p":3,"acting":3,"num_store_stats":5}],"osd_stats":[{"osd":7,"up_from":47,"seq":201863462929,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27496,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939928,"statfs":{"total":21470642176,"available":21442486272,"internally_reserved":0,"allocated":667648,"data_stored":285541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":42,"seq":180388626456,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":671744,"data_stored":285930,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":36,"seq":154618822686,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27956,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939468,"statfs":{"total":21470642176,"available":21442015232,"internally_reserved":0,"allocated":1134592,"data_stored":745210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":31,"seq":133143986213,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27496,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939928,"statfs":{"total":21470642176,"available":21442486272,"internally_reserved":0,"allocated":667648,"data_stored":285541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":25,"seq":107374182443,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27960,"kb_used_data":1112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939464,"statfs":{"total":21470642176,"available":21442011136,"internally_reserved":0,"allocated":1138688,"data_stored":745229,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired
":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411378,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27948,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939476,"statfs":{"total":21470642176,"available":21442023424,"internally_reserved":0,"allocated":1134592,"data_stored":744840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574904,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27504,"kb_used_data":656,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939920,"statfs":{"total":21470642176,"available":21442478080,"internally_reserved":0,"allocated":671744,"data_stored":285560,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738431,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":667648,"data_stored":285541,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"ava
ilable":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-08T23:22:58.008 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-08T23:22:58.008 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-08T23:22:58.008 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-08T23:22:58.008 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph health --format=json 2026-03-08T23:22:58.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:58 vm04 bash[19918]: cluster 2026-03-08T23:22:56.225553+0000 mgr.x (mgr.14150) 296 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-08T23:22:58.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:58 vm04 bash[19918]: cluster 2026-03-08T23:22:56.225553+0000 mgr.x (mgr.14150) 296 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-08T23:22:58.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:58 vm02 bash[17457]: cluster 2026-03-08T23:22:56.225553+0000 mgr.x (mgr.14150) 296 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-08T23:22:58.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:58 vm02 bash[17457]: cluster 2026-03-08T23:22:56.225553+0000 mgr.x (mgr.14150) 296 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-08T23:22:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:58 vm10 bash[20034]: cluster 2026-03-08T23:22:56.225553+0000 mgr.x (mgr.14150) 296 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-08T23:22:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:58 vm10 bash[20034]: cluster 2026-03-08T23:22:56.225553+0000 mgr.x (mgr.14150) 296 : cluster [DBG] pgmap v256: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.8 KiB/s rd, 1 op/s 2026-03-08T23:22:59.374 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:59 vm04 bash[19918]: audit 2026-03-08T23:22:57.959743+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:22:59.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:22:59 vm04 bash[19918]: audit 2026-03-08T23:22:57.959743+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:22:59.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:59 vm02 bash[17457]: audit 2026-03-08T23:22:57.959743+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:22:59.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:22:59 vm02 bash[17457]: audit 2026-03-08T23:22:57.959743+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:22:59.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:59 vm10 bash[20034]: audit 2026-03-08T23:22:57.959743+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:22:59.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:22:59 vm10 bash[20034]: audit 2026-03-08T23:22:57.959743+0000 mgr.x (mgr.14150) 297 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:23:00.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:00 vm04 bash[19918]: cluster 2026-03-08T23:22:58.225858+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:00.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:00 vm04 bash[19918]: cluster 2026-03-08T23:22:58.225858+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:00.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:00 vm02 bash[17457]: cluster 2026-03-08T23:22:58.225858+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:00.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:00 vm02 bash[17457]: cluster 2026-03-08T23:22:58.225858+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:00.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:22:59 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:23:00.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:00 vm10 bash[20034]: cluster 2026-03-08T23:22:58.225858+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:00.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:00 vm10 bash[20034]: cluster 2026-03-08T23:22:58.225858+0000 mgr.x (mgr.14150) 298 : cluster [DBG] pgmap v257: 4 pgs: 4 active+clean; 449 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:01.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:01 vm04 bash[19918]: audit 2026-03-08T23:22:59.974852+0000 mgr.x (mgr.14150) 299 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:01.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:01 vm04 bash[19918]: audit 2026-03-08T23:22:59.974852+0000 mgr.x (mgr.14150) 299 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:01.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:01 vm02 bash[17457]: audit 2026-03-08T23:22:59.974852+0000 mgr.x (mgr.14150) 299 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:01.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:01 vm02 bash[17457]: audit 2026-03-08T23:22:59.974852+0000 mgr.x (mgr.14150) 299 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:01.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:01 vm10 bash[20034]: audit 2026-03-08T23:22:59.974852+0000 mgr.x (mgr.14150) 299 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:01.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:01 vm10 bash[20034]: audit 2026-03-08T23:22:59.974852+0000 mgr.x (mgr.14150) 299 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:01.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:00 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:23:01.719 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:23:02.005 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-08T23:23:02.005 INFO:teuthology.orchestra.run.vm02.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-08T23:23:02.065 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-08T23:23:02.065 INFO:tasks.cephadm:Setup complete, yielding 2026-03-08T23:23:02.065 INFO:teuthology.run_tasks:Running task cephadm.shell... 
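[editor's note] The health wait logged above boils down to polling `ceph health --format=json` through the cephadm shell until the cluster reports HEALTH_OK. A minimal Python sketch follows; the cephadm path, container image, and fsid are copied verbatim from the command line logged above, while the helper name, timeout, and poll interval are illustrative and not teuthology's actual implementation.

    import json
    import subprocess
    import time

    # Values copied from the cephadm invocation logged above.
    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "91105a84-1b44-11f1-9a43-e95894f13987"

    def wait_until_healthy(timeout=600, interval=4):
        # Poll `ceph health --format=json` inside a cephadm shell until the
        # cluster reports HEALTH_OK, mirroring the wait in the log above.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(
                ["sudo", CEPHADM, "--image", IMAGE, "shell",
                 "--fsid", FSID, "--", "ceph", "health", "--format=json"],
                text=True,
            )
            status = json.loads(out)
            if status.get("status") == "HEALTH_OK":
                return status
            time.sleep(interval)
        raise TimeoutError("cluster did not reach HEALTH_OK within %ds" % timeout)

In this run the loop would return on the first try, since the very next health query above already came back {"status":"HEALTH_OK","checks":{},"mutes":[]}.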
2026-03-08T23:23:02.068 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm02.local 2026-03-08T23:23:02.068 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- bash -c 'ceph orch status' 2026-03-08T23:23:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:02 vm04 bash[19918]: cluster 2026-03-08T23:23:00.226109+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:02 vm04 bash[19918]: cluster 2026-03-08T23:23:00.226109+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:02 vm04 bash[19918]: audit 2026-03-08T23:23:00.920955+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:02 vm04 bash[19918]: audit 2026-03-08T23:23:00.920955+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:02 vm04 bash[19918]: audit 2026-03-08T23:23:02.005467+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.102:0/2161077864' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:23:02.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:02 vm04 bash[19918]: audit 2026-03-08T23:23:02.005467+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.102:0/2161077864' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:23:02.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:02 vm02 bash[17457]: cluster 2026-03-08T23:23:00.226109+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:02.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:02 vm02 bash[17457]: cluster 2026-03-08T23:23:00.226109+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:02.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:02 vm02 bash[17457]: audit 2026-03-08T23:23:00.920955+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:02.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:02 vm02 bash[17457]: audit 2026-03-08T23:23:00.920955+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:02.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:02 vm02 bash[17457]: audit 2026-03-08T23:23:02.005467+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 
192.168.123.102:0/2161077864' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:23:02.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:02 vm02 bash[17457]: audit 2026-03-08T23:23:02.005467+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.102:0/2161077864' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:23:02.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:02 vm10 bash[20034]: cluster 2026-03-08T23:23:00.226109+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:02.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:02 vm10 bash[20034]: cluster 2026-03-08T23:23:00.226109+0000 mgr.x (mgr.14150) 300 : cluster [DBG] pgmap v258: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:02.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:02 vm10 bash[20034]: audit 2026-03-08T23:23:00.920955+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:02.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:02 vm10 bash[20034]: audit 2026-03-08T23:23:00.920955+0000 mgr.x (mgr.14150) 301 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:02.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:02 vm10 bash[20034]: audit 2026-03-08T23:23:02.005467+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 192.168.123.102:0/2161077864' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:23:02.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:02 vm10 bash[20034]: audit 2026-03-08T23:23:02.005467+0000 mon.c (mon.1) 24 : audit [DBG] from='client.? 
192.168.123.102:0/2161077864' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:23:04.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:04 vm04 bash[19918]: cluster 2026-03-08T23:23:02.226341+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:04.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:04 vm04 bash[19918]: cluster 2026-03-08T23:23:02.226341+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:04.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:04 vm02 bash[17457]: cluster 2026-03-08T23:23:02.226341+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:04.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:04 vm02 bash[17457]: cluster 2026-03-08T23:23:02.226341+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:04.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:04 vm10 bash[20034]: cluster 2026-03-08T23:23:02.226341+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:04.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:04 vm10 bash[20034]: cluster 2026-03-08T23:23:02.226341+0000 mgr.x (mgr.14150) 302 : cluster [DBG] pgmap v259: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:05.735 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:23:06.001 INFO:teuthology.orchestra.run.vm02.stdout:Backend: cephadm 2026-03-08T23:23:06.001 INFO:teuthology.orchestra.run.vm02.stdout:Available: Yes 2026-03-08T23:23:06.001 INFO:teuthology.orchestra.run.vm02.stdout:Paused: No 2026-03-08T23:23:06.055 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- bash -c 'ceph orch ps' 2026-03-08T23:23:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:06 vm04 bash[19918]: cluster 2026-03-08T23:23:04.226570+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:06 vm04 bash[19918]: cluster 2026-03-08T23:23:04.226570+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:06 vm02 bash[17457]: cluster 2026-03-08T23:23:04.226570+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:06.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:06 vm02 bash[17457]: cluster 2026-03-08T23:23:04.226570+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v260: 4 pgs: 
4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:06.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:06 vm10 bash[20034]: cluster 2026-03-08T23:23:04.226570+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:06.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:06 vm10 bash[20034]: cluster 2026-03-08T23:23:04.226570+0000 mgr.x (mgr.14150) 303 : cluster [DBG] pgmap v260: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:07.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:07 vm04 bash[19918]: audit 2026-03-08T23:23:06.001788+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:07.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:07 vm04 bash[19918]: audit 2026-03-08T23:23:06.001788+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:07.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:07 vm02 bash[17457]: audit 2026-03-08T23:23:06.001788+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:07.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:07 vm02 bash[17457]: audit 2026-03-08T23:23:06.001788+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:07.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:07 vm10 bash[20034]: audit 2026-03-08T23:23:06.001788+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:07.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:07 vm10 bash[20034]: audit 2026-03-08T23:23:06.001788+0000 mgr.x (mgr.14150) 304 : audit [DBG] from='client.14667 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:08.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:08 vm04 bash[19918]: cluster 2026-03-08T23:23:06.226855+0000 mgr.x (mgr.14150) 305 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:08.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:08 vm04 bash[19918]: cluster 2026-03-08T23:23:06.226855+0000 mgr.x (mgr.14150) 305 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:08 vm02 bash[17457]: cluster 2026-03-08T23:23:06.226855+0000 mgr.x (mgr.14150) 305 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:08.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:08 vm02 bash[17457]: cluster 2026-03-08T23:23:06.226855+0000 mgr.x (mgr.14150) 305 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:08.407 
INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:08 vm10 bash[20034]: cluster 2026-03-08T23:23:06.226855+0000 mgr.x (mgr.14150) 305 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:08.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:08 vm10 bash[20034]: cluster 2026-03-08T23:23:06.226855+0000 mgr.x (mgr.14150) 305 : cluster [DBG] pgmap v261: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:09.748 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.iscsi.a vm02 *:5000 running (60s) 54s ago 60s 67.5M - 3.9 654f31e6858e 56ea784b496d 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.iscsi.b vm10 *:5000 running (59s) 54s ago 59s 47.3M - 3.9 654f31e6858e 80eaa2168c9b 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm02 *:9283,8765 running (7m) 54s ago 7m 522M - 19.2.3-678-ge911bdeb 654f31e6858e 2eb71067bd81 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (7m) 54s ago 7m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7bd61b16fc43 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm04 running (6m) 3m ago 6m 35.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 5da79f66c075 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm10 running (6m) 54s ago 6m 39.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 878a7d718fe4 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (5m) 54s ago 5m 38.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 12a21f069ac2 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (4m) 54s ago 4m 58.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e0e02d63ee13 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm04 running (4m) 3m ago 4m 35.7M 1517M 19.2.3-678-ge911bdeb 654f31e6858e cacb23d8ecec 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm04 running (3m) 3m ago 3m 31.0M 1517M 19.2.3-678-ge911bdeb 654f31e6858e dc84f5abb240 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm04 running (3m) 3m ago 3m 22.3M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 0cb9682993cb 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm10 running (2m) 54s ago 2m 36.9M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 34dfbb0cf812 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm10 running (2m) 54s ago 2m 35.5M 1517M 19.2.3-678-ge911bdeb 654f31e6858e 6e837ca590c6 2026-03-08T23:23:10.004 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm10 running (98s) 54s ago 99s 32.5M 1517M 19.2.3-678-ge911bdeb 654f31e6858e f1108be5855f 2026-03-08T23:23:10.015 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:09 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:23:10.060 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- bash -c 'ceph orch ls' 
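[editor's note] Rather than eyeballing the `ceph orch ps` table above (2 iscsi gateways, 1 mgr, 3 mons, 8 OSDs, all "running"), the same check can be done programmatically. A sketch continuing the one above (same CEPHADM/IMAGE/FSID constants and imports): `--format json` is a standard option for `ceph orch` commands, but the exact field names used here (`daemon_type`, `daemon_id`, `status_desc`) are from recent Ceph releases and may vary.

    def orch_ps():
        # Same invocation as the logged `ceph orch ps`, but requesting JSON
        # so daemon states can be checked in code instead of read off a table.
        out = subprocess.check_output(
            ["sudo", CEPHADM, "--image", IMAGE, "shell",
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "--fsid", FSID, "--",
             "ceph", "orch", "ps", "--format", "json"],
            text=True,
        )
        return json.loads(out)

    # Expect every daemon in the table above to be "running".
    daemons = orch_ps()
    stopped = ["%s.%s" % (d.get("daemon_type"), d.get("daemon_id"))
               for d in daemons if d.get("status_desc") != "running"]
    assert not stopped, "daemons not running: %s" % stopped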
2026-03-08T23:23:10.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:10 vm04 bash[19918]: cluster 2026-03-08T23:23:08.227144+0000 mgr.x (mgr.14150) 306 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:10.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:10 vm04 bash[19918]: cluster 2026-03-08T23:23:08.227144+0000 mgr.x (mgr.14150) 306 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:10 vm02 bash[17457]: cluster 2026-03-08T23:23:08.227144+0000 mgr.x (mgr.14150) 306 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:10 vm02 bash[17457]: cluster 2026-03-08T23:23:08.227144+0000 mgr.x (mgr.14150) 306 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:10.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:10 vm10 bash[20034]: cluster 2026-03-08T23:23:08.227144+0000 mgr.x (mgr.14150) 306 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:10.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:10 vm10 bash[20034]: cluster 2026-03-08T23:23:08.227144+0000 mgr.x (mgr.14150) 306 : cluster [DBG] pgmap v262: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:11.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:11 vm04 bash[19918]: audit 2026-03-08T23:23:09.985682+0000 mgr.x (mgr.14150) 307 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:11.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:11 vm04 bash[19918]: audit 2026-03-08T23:23:09.985682+0000 mgr.x (mgr.14150) 307 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:11.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:11 vm04 bash[19918]: audit 2026-03-08T23:23:10.000350+0000 mgr.x (mgr.14150) 308 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:11.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:11 vm04 bash[19918]: audit 2026-03-08T23:23:10.000350+0000 mgr.x (mgr.14150) 308 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:11 vm02 bash[17457]: audit 2026-03-08T23:23:09.985682+0000 mgr.x (mgr.14150) 307 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:11 vm02 bash[17457]: audit 2026-03-08T23:23:09.985682+0000 mgr.x (mgr.14150) 307 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:11 vm02 bash[17457]: audit 2026-03-08T23:23:10.000350+0000 mgr.x 
(mgr.14150) 308 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:11.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:11 vm02 bash[17457]: audit 2026-03-08T23:23:10.000350+0000 mgr.x (mgr.14150) 308 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:11 vm10 bash[20034]: audit 2026-03-08T23:23:09.985682+0000 mgr.x (mgr.14150) 307 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:11 vm10 bash[20034]: audit 2026-03-08T23:23:09.985682+0000 mgr.x (mgr.14150) 307 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:11 vm10 bash[20034]: audit 2026-03-08T23:23:10.000350+0000 mgr.x (mgr.14150) 308 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:11 vm10 bash[20034]: audit 2026-03-08T23:23:10.000350+0000 mgr.x (mgr.14150) 308 : audit [DBG] from='client.14673 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:23:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:10 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:23:12.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:12 vm04 bash[19918]: cluster 2026-03-08T23:23:10.227464+0000 mgr.x (mgr.14150) 309 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:12.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:12 vm04 bash[19918]: cluster 2026-03-08T23:23:10.227464+0000 mgr.x (mgr.14150) 309 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:12.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:12 vm04 bash[19918]: audit 2026-03-08T23:23:10.922123+0000 mgr.x (mgr.14150) 310 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:12.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:12 vm04 bash[19918]: audit 2026-03-08T23:23:10.922123+0000 mgr.x (mgr.14150) 310 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:12 vm02 bash[17457]: cluster 2026-03-08T23:23:10.227464+0000 mgr.x (mgr.14150) 309 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:12 vm02 bash[17457]: cluster 2026-03-08T23:23:10.227464+0000 mgr.x (mgr.14150) 309 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:12 vm02 bash[17457]: audit 2026-03-08T23:23:10.922123+0000 mgr.x 
(mgr.14150) 310 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:12.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:12 vm02 bash[17457]: audit 2026-03-08T23:23:10.922123+0000 mgr.x (mgr.14150) 310 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:12 vm10 bash[20034]: cluster 2026-03-08T23:23:10.227464+0000 mgr.x (mgr.14150) 309 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:12 vm10 bash[20034]: cluster 2026-03-08T23:23:10.227464+0000 mgr.x (mgr.14150) 309 : cluster [DBG] pgmap v263: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:12 vm10 bash[20034]: audit 2026-03-08T23:23:10.922123+0000 mgr.x (mgr.14150) 310 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:12.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:12 vm10 bash[20034]: audit 2026-03-08T23:23:10.922123+0000 mgr.x (mgr.14150) 310 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:13.762 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:23:13.998 INFO:teuthology.orchestra.run.vm02.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-08T23:23:13.999 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.datapool ?:5000 2/2 58s ago 65s vm02=iscsi.a;vm10=iscsi.b;count:2 2026-03-08T23:23:13.999 INFO:teuthology.orchestra.run.vm02.stdout:mgr 1/1 58s ago 6m vm02=x;count:1 2026-03-08T23:23:13.999 INFO:teuthology.orchestra.run.vm02.stdout:mon 3/3 3m ago 6m vm02:192.168.123.102=a;vm04:192.168.123.104=b;vm10:192.168.123.110=c;count:3 2026-03-08T23:23:13.999 INFO:teuthology.orchestra.run.vm02.stdout:osd 8 3m ago - 2026-03-08T23:23:14.045 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- bash -c 'ceph orch host ls' 2026-03-08T23:23:14.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:14 vm04 bash[19918]: cluster 2026-03-08T23:23:12.227693+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:14.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:14 vm04 bash[19918]: cluster 2026-03-08T23:23:12.227693+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:14.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:14 vm02 bash[17457]: cluster 2026-03-08T23:23:12.227693+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:23:14.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:14 
vm02 bash[17457]: cluster 2026-03-08T23:23:12.227693+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:14.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:14 vm10 bash[20034]: cluster 2026-03-08T23:23:12.227693+0000 mgr.x (mgr.14150) 311 : cluster [DBG] pgmap v264: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:15.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:15 vm04 bash[19918]: audit 2026-03-08T23:23:13.997937+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.24518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:15.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:15 vm02 bash[17457]: audit 2026-03-08T23:23:13.997937+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.24518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:15.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:15 vm10 bash[20034]: audit 2026-03-08T23:23:13.997937+0000 mgr.x (mgr.14150) 312 : audit [DBG] from='client.24518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:16.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:16 vm04 bash[19918]: cluster 2026-03-08T23:23:14.227891+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v265: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:16 vm04 bash[19918]: audit 2026-03-08T23:23:15.709513+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:23:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:16 vm04 bash[19918]: audit 2026-03-08T23:23:16.061345+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:23:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:16 vm04 bash[19918]: audit 2026-03-08T23:23:16.061928+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:23:16.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:16 vm04 bash[19918]: audit 2026-03-08T23:23:16.070111+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:23:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:16 vm02 bash[17457]: cluster 2026-03-08T23:23:14.227891+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v265: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:16 vm02 bash[17457]: audit 2026-03-08T23:23:15.709513+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:23:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:16 vm02 bash[17457]: audit 2026-03-08T23:23:16.061345+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:23:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:16 vm02 bash[17457]: audit 2026-03-08T23:23:16.061928+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:23:16.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:16 vm02 bash[17457]: audit 2026-03-08T23:23:16.070111+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:23:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:16 vm10 bash[20034]: cluster 2026-03-08T23:23:14.227891+0000 mgr.x (mgr.14150) 313 : cluster [DBG] pgmap v265: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:16 vm10 bash[20034]: audit 2026-03-08T23:23:15.709513+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:23:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:16 vm10 bash[20034]: audit 2026-03-08T23:23:16.061345+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:23:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:16 vm10 bash[20034]: audit 2026-03-08T23:23:16.061928+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:23:16.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:16 vm10 bash[20034]: audit 2026-03-08T23:23:16.070111+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:23:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:17 vm04 bash[19918]: cluster 2026-03-08T23:23:16.228179+0000 mgr.x (mgr.14150) 314 : cluster [DBG] pgmap v266: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:17 vm02 bash[17457]: cluster 2026-03-08T23:23:16.228179+0000 mgr.x (mgr.14150) 314 : cluster [DBG] pgmap v266: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:17 vm10 bash[20034]: cluster 2026-03-08T23:23:16.228179+0000 mgr.x (mgr.14150) 314 : cluster [DBG] pgmap v266: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:17.780 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:23:18.023 INFO:teuthology.orchestra.run.vm02.stdout:HOST  ADDR             LABELS  STATUS
2026-03-08T23:23:18.023 INFO:teuthology.orchestra.run.vm02.stdout:vm02  192.168.123.102
2026-03-08T23:23:18.024 INFO:teuthology.orchestra.run.vm02.stdout:vm04  192.168.123.104
2026-03-08T23:23:18.024 INFO:teuthology.orchestra.run.vm02.stdout:vm10  192.168.123.110
2026-03-08T23:23:18.024 INFO:teuthology.orchestra.run.vm02.stdout:3 hosts in cluster
2026-03-08T23:23:18.074 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- bash -c 'ceph orch device ls'
2026-03-08T23:23:18.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:18 vm02 bash[17457]: audit 2026-03-08T23:23:18.023588+0000 mgr.x (mgr.14150) 315 : audit [DBG] from='client.14685 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:18.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:18 vm04 bash[19918]: audit 2026-03-08T23:23:18.023588+0000 mgr.x (mgr.14150) 315 : audit [DBG] from='client.14685 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
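Every orchestrator query in this run goes through the same wrapper: the host-side cephadm binary is pointed at the CI container image and the cluster fsid, and the actual ceph command runs inside the container. A minimal standalone sketch of that pattern, reusing the image tag, fsid, and paths from the command logged above (values for any other cluster will differ):

    #!/usr/bin/env bash
    # Sketch: run a one-off ceph CLI command inside the cephadm shell container.
    # IMAGE and FSID are copied from this run's log; nothing here is verified
    # beyond what the logged command line itself shows.
    IMAGE=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
    FSID=91105a84-1b44-11f1-9a43-e95894f13987
    sudo /home/ubuntu/cephtest/cephadm --image "$IMAGE" shell \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        --fsid "$FSID" \
        -- bash -c 'ceph orch host ls'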
2026-03-08T23:23:18.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:18 vm10 bash[20034]: audit 2026-03-08T23:23:18.023588+0000 mgr.x (mgr.14150) 315 : audit [DBG] from='client.14685 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:19.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:19 vm04 bash[19918]: cluster 2026-03-08T23:23:18.228423+0000 mgr.x (mgr.14150) 316 : cluster [DBG] pgmap v267: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:19.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:19 vm02 bash[17457]: cluster 2026-03-08T23:23:18.228423+0000 mgr.x (mgr.14150) 316 : cluster [DBG] pgmap v267: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:19.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:19 vm10 bash[20034]: cluster 2026-03-08T23:23:18.228423+0000 mgr.x (mgr.14150) 316 : cluster [DBG] pgmap v267: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:20.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:20 vm02 bash[17457]: audit 2026-03-08T23:23:19.996266+0000 mgr.x (mgr.14150) 317 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:20.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:19 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:23:20.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:20 vm04 bash[19918]: audit 2026-03-08T23:23:19.996266+0000 mgr.x (mgr.14150) 317 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:20.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:20 vm10 bash[20034]: audit 2026-03-08T23:23:19.996266+0000 mgr.x (mgr.14150) 317 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:21.190 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:20 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:23:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:21 vm04 bash[19918]: cluster 2026-03-08T23:23:20.228705+0000 mgr.x (mgr.14150) 318 : cluster [DBG] pgmap v268: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:21.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:21 vm04 bash[19918]: audit 2026-03-08T23:23:20.932786+0000 mgr.x (mgr.14150) 319 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:21.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:21 vm02 bash[17457]: cluster 2026-03-08T23:23:20.228705+0000 mgr.x (mgr.14150) 318 : cluster [DBG] pgmap v268: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:21.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:21 vm02 bash[17457]: audit 2026-03-08T23:23:20.932786+0000 mgr.x (mgr.14150) 319 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:21.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:21 vm10 bash[20034]: cluster 2026-03-08T23:23:20.228705+0000 mgr.x (mgr.14150) 318 : cluster [DBG] pgmap v268: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:21.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:21 vm10 bash[20034]: audit 2026-03-08T23:23:20.932786+0000 mgr.x (mgr.14150) 319 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:21.795 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:HOST  PATH      TYPE  DEVICE ID             SIZE   AVAILABLE  REFRESHED  REJECT REASONS
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         4m ago     Has a FileSystem, Insufficient space (<5GB)
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdb  hdd   DWNBRSTVMM02001       20.0G  Yes        4m ago
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdc  hdd   DWNBRSTVMM02002       20.0G  Yes        4m ago
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vdd  hdd   DWNBRSTVMM02003       20.0G  No         4m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm02  /dev/vde  hdd   DWNBRSTVMM02004       20.0G  No         4m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm04  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         3m ago     Has a FileSystem, Insufficient space (<5GB)
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm04  /dev/vdb  hdd   DWNBRSTVMM04001       20.0G  Yes        3m ago
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm04  /dev/vdc  hdd   DWNBRSTVMM04002       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm04  /dev/vdd  hdd   DWNBRSTVMM04003       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm04  /dev/vde  hdd   DWNBRSTVMM04004       20.0G  No         3m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm10  /dev/sr0  hdd   QEMU_DVD-ROM_QM00003  366k   No         99s ago    Has a FileSystem, Insufficient space (<5GB)
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm10  /dev/vdb  hdd   DWNBRSTVMM10001       20.0G  Yes        99s ago
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm10  /dev/vdc  hdd   DWNBRSTVMM10002       20.0G  No         99s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm10  /dev/vdd  hdd   DWNBRSTVMM10003       20.0G  No         99s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-08T23:23:22.055 INFO:teuthology.orchestra.run.vm02.stdout:vm10  /dev/vde  hdd   DWNBRSTVMM10004       20.0G  No         99s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
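The availability matrix above is easier to consume programmatically: ceph orch device ls also accepts --format json. A rough sketch of filtering it down to the usable devices with jq; the JSON field names used here (addr, devices[].path, devices[].available) are assumptions about the output shape rather than something this log shows, so verify them against the real output first:

    # Sketch: list host/path pairs the orchestrator reports as available.
    # Run inside "cephadm shell" as in the command echoed above.
    ceph orch device ls --format json \
      | jq -r '.[] | .addr as $host
               | .devices[] | select(.available) | "\($host) \(.path)"'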
2026-03-08T23:23:22.105 INFO:teuthology.run_tasks:Running task install...
2026-03-08T23:23:22.107 DEBUG:teuthology.task.install:project ceph
2026-03-08T23:23:22.107 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-08T23:23:22.107 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['open-iscsi', 'multipath-tools', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['iscsi-initiator-utils', 'device-mapper-multipath', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-08T23:23:22.107 INFO:teuthology.task.install:Using flavor: default
2026-03-08T23:23:22.109 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-08T23:23:22.109 INFO:teuthology.task.install:extra packages: []
2026-03-08T23:23:22.109 DEBUG:teuthology.orchestra.run.vm02:> sudo apt-key list | grep Ceph
2026-03-08T23:23:22.109 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-key list | grep Ceph
2026-03-08T23:23:22.110 DEBUG:teuthology.orchestra.run.vm10:> sudo apt-key list | grep Ceph
2026-03-08T23:23:22.146 INFO:teuthology.orchestra.run.vm02.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-08T23:23:22.147 INFO:teuthology.orchestra.run.vm04.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-08T23:23:22.150 INFO:teuthology.orchestra.run.vm10.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-08T23:23:22.165 INFO:teuthology.orchestra.run.vm02.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-08T23:23:22.165 INFO:teuthology.orchestra.run.vm02.stdout:uid [ unknown] Ceph.com (release key)
2026-03-08T23:23:22.166 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-08T23:23:22.166 INFO:teuthology.task.install.deb:Installing system (non-project) packages: open-iscsi, multipath-tools, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-08T23:23:22.166 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:23:22.197 INFO:teuthology.orchestra.run.vm10.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-08T23:23:22.197 INFO:teuthology.orchestra.run.vm10.stdout:uid [ unknown] Ceph.com (release key)
2026-03-08T23:23:22.197 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-08T23:23:22.197 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph.com (release key)
2026-03-08T23:23:22.197 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-08T23:23:22.197 INFO:teuthology.task.install.deb:Installing system (non-project) packages: open-iscsi, multipath-tools, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-08T23:23:22.197 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:23:22.198 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-08T23:23:22.198 INFO:teuthology.task.install.deb:Installing system (non-project) packages: open-iscsi, multipath-tools, python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-08T23:23:22.198 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:23:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:22 vm04 bash[19918]: audit 2026-03-08T23:23:22.054142+0000 mgr.x (mgr.14150) 320 : audit [DBG] from='client.14691 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:22.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:22 vm02 bash[17457]: audit 2026-03-08T23:23:22.054142+0000 mgr.x (mgr.14150) 320 : audit [DBG] from='client.14691 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
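The Querying lines a few entries above hit shaman's search API with the distro, flavor, and sha1 encoded in the URL; the same lookup can be reproduced from a shell. A rough equivalent with curl and jq, under the assumption that the response is a JSON array whose entries carry a url field pointing at the repo (that field name is a guess, not something this log confirms; inspect the raw response first):

    # Sketch: ask shaman which ready builds exist for this sha1/distro/flavor.
    SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df
    curl -s "https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu/22.04/x86_64&sha1=${SHA1}" \
      | jq -r '.[0].url'   # assumed field name; check the JSON shape yourself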
2026-03-08T23:23:22.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:22 vm10 bash[20034]: audit 2026-03-08T23:23:22.054142+0000 mgr.x (mgr.14150) 320 : audit [DBG] from='client.14691 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:23:22.782 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-08T23:23:22.782 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:23:22.834 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-08T23:23:22.834 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:23:22.910 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-08T23:23:22.910 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:23:23.253 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:23:23.253 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-08T23:23:23.261 DEBUG:teuthology.orchestra.run.vm02:> sudo apt-get update
2026-03-08T23:23:23.319 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:23:23.319 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-08T23:23:23.327 DEBUG:teuthology.orchestra.run.vm10:> sudo apt-get update
2026-03-08T23:23:23.428 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:23:23.428 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-08T23:23:23.437 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update
2026-03-08T23:23:23.447 INFO:teuthology.orchestra.run.vm02.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T23:23:23.453 INFO:teuthology.orchestra.run.vm02.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T23:23:23.459 INFO:teuthology.orchestra.run.vm02.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T23:23:23.488 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:23 vm04 bash[19918]: cluster 2026-03-08T23:23:22.228999+0000 mgr.x (mgr.14150) 321 : cluster [DBG] pgmap v269: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:23.492 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:23 vm10 bash[20034]: cluster 2026-03-08T23:23:22.228999+0000 mgr.x (mgr.14150) 321 : cluster [DBG] pgmap v269: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:23.510 INFO:teuthology.orchestra.run.vm10.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T23:23:23.516 INFO:teuthology.orchestra.run.vm10.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T23:23:23.523 INFO:teuthology.orchestra.run.vm10.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T23:23:23.632 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T23:23:23.634 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T23:23:23.643 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T23:23:23.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:23 vm02 bash[17457]: cluster 2026-03-08T23:23:22.228999+0000 mgr.x (mgr.14150) 321 : cluster [DBG] pgmap v269: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:23.732 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T23:23:23.825 INFO:teuthology.orchestra.run.vm02.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T23:23:23.881 INFO:teuthology.orchestra.run.vm10.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T23:23:23.940 INFO:teuthology.orchestra.run.vm10.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-08T23:23:23.955 INFO:teuthology.orchestra.run.vm02.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-08T23:23:24.027 INFO:teuthology.orchestra.run.vm04.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-08T23:23:24.052 INFO:teuthology.orchestra.run.vm10.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-08T23:23:24.071 INFO:teuthology.orchestra.run.vm02.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-08T23:23:24.147 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
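The sudo dd of=/etc/apt/sources.list.d/ceph.list calls above write the chacra repo definition from stdin, so the file's contents never appear in the log. Pieced together from the Ign:/Get: lines that follow (repo URL, suite jammy, component main), the file plausibly holds a single deb line; a hedged reconstruction, not a verbatim capture:

    # Sketch: recreate what ceph.list most likely ends up containing.
    SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df
    echo "deb https://1.chacra.ceph.com/r/ceph/squid/${SHA1}/ubuntu/jammy/flavors/default jammy main" \
      | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update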
2026-03-08T23:23:24.163 INFO:teuthology.orchestra.run.vm10.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-08T23:23:24.187 INFO:teuthology.orchestra.run.vm02.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-08T23:23:24.267 INFO:teuthology.orchestra.run.vm04.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-08T23:23:24.275 INFO:teuthology.orchestra.run.vm10.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-08T23:23:24.302 INFO:teuthology.orchestra.run.vm02.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-08T23:23:24.352 INFO:teuthology.orchestra.run.vm10.stdout:Fetched 25.8 kB in 1s (29.9 kB/s)
2026-03-08T23:23:24.381 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 25.8 kB in 1s (26.8 kB/s)
2026-03-08T23:23:24.387 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-08T23:23:24.587 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 25.8 kB in 1s (26.1 kB/s)
2026-03-08T23:23:25.070 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:23:25.081 DEBUG:teuthology.orchestra.run.vm10:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:23:25.115 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:23:25.118 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:23:25.129 DEBUG:teuthology.orchestra.run.vm02:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:23:25.164 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:23:25.313 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:23:25.324 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:23:25.337 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:23:25.337 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:23:25.378 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:23:25.379 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:23:25.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:25 vm02 bash[17457]: cluster 2026-03-08T23:23:24.229347+0000 mgr.x (mgr.14150) 322 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:25.401 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
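The install commands above pin every project package to the single version chacra reported (19.2.3-678-ge911bdeb-1jammy), so apt cannot silently mix builds from different refs. The same effect can be had by hand; a compact sketch that appends the pin to each package name before handing the list to apt-get, using the package set and version from this run:

    # Sketch: pin each Ceph package to one build, as the logged command does.
    # --force-confdef/--force-confold keep dpkg from prompting about conffiles.
    VERSION=19.2.3-678-ge911bdeb-1jammy
    PKGS=(ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test
          ceph-volume radosgw python3-rados python3-rgw python3-cephfs
          python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse)
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" \
        install "${PKGS[@]/%/=$VERSION}"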
2026-03-08T23:23:25.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:25 vm10 bash[20034]: cluster 2026-03-08T23:23:24.229347+0000 mgr.x (mgr.14150) 322 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:25.501 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:23:25.501 INFO:teuthology.orchestra.run.vm10.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:23:25.501 INFO:teuthology.orchestra.run.vm10.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T23:23:25.501 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:23:25.502 INFO:teuthology.orchestra.run.vm10.stdout:The following additional packages will be installed:
2026-03-08T23:23:25.502 INFO:teuthology.orchestra.run.vm10.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-08T23:23:25.502 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-08T23:23:25.502 INFO:teuthology.orchestra.run.vm10.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T23:23:25.502 INFO:teuthology.orchestra.run.vm10.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-08T23:23:25.502 INFO:teuthology.orchestra.run.vm10.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:23:25.503 INFO:teuthology.orchestra.run.vm10.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout:Suggested packages:
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: smart-notifier mailx | mailutils
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout:Recommended packages:
2026-03-08T23:23:25.504 INFO:teuthology.orchestra.run.vm10.stdout: btrfs-tools
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout:The following NEW packages will be installed:
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-08T23:23:25.545 INFO:teuthology.orchestra.run.vm10.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-08T23:23:25.546 INFO:teuthology.orchestra.run.vm10.stdout: socat unzip xmlstarlet zip
2026-03-08T23:23:25.547 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be upgraded:
2026-03-08T23:23:25.548 INFO:teuthology.orchestra.run.vm10.stdout: librados2 librbd1
2026-03-08T23:23:25.593 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:23:25.593 INFO:teuthology.orchestra.run.vm02.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:23:25.593 INFO:teuthology.orchestra.run.vm02.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout:The following additional packages will be installed:
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-08T23:23:25.594 INFO:teuthology.orchestra.run.vm02.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:23:25.595 INFO:teuthology.orchestra.run.vm02.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout:Suggested packages:
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: smart-notifier mailx | mailutils
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout:Recommended packages:
2026-03-08T23:23:25.596 INFO:teuthology.orchestra.run.vm02.stdout: btrfs-tools
2026-03-08T23:23:25.608 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:23:25.608 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:23:25.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:25 vm04 bash[19918]: cluster 2026-03-08T23:23:24.229347+0000 mgr.x (mgr.14150) 322 : cluster [DBG] pgmap v270: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout:The following NEW packages will be installed:
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-08T23:23:25.635 INFO:teuthology.orchestra.run.vm02.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: socat unzip xmlstarlet zip
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be upgraded:
2026-03-08T23:23:25.636 INFO:teuthology.orchestra.run.vm02.stdout: librados2 librbd1
2026-03-08T23:23:25.648 INFO:teuthology.orchestra.run.vm10.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:23:25.648 INFO:teuthology.orchestra.run.vm10.stdout:Need to get 178 MB of archives.
2026-03-08T23:23:25.648 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-08T23:23:25.648 INFO:teuthology.orchestra.run.vm10.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-08T23:23:25.689 INFO:teuthology.orchestra.run.vm10.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-08T23:23:25.689 INFO:teuthology.orchestra.run.vm10.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-08T23:23:25.698 INFO:teuthology.orchestra.run.vm10.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-08T23:23:25.712 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:23:25.712 INFO:teuthology.orchestra.run.vm04.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:23:25.712 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T23:23:25.712 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:23:25.712 INFO:teuthology.orchestra.run.vm04.stdout:The following additional packages will be installed:
2026-03-08T23:23:25.712 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout:Suggested packages:
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: smart-notifier mailx | mailutils
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout:Recommended packages:
2026-03-08T23:23:25.713 INFO:teuthology.orchestra.run.vm04.stdout: btrfs-tools
2026-03-08T23:23:25.726 INFO:teuthology.orchestra.run.vm10.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-08T23:23:25.727 INFO:teuthology.orchestra.run.vm10.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-08T23:23:25.734 INFO:teuthology.orchestra.run.vm10.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-08T23:23:25.736 INFO:teuthology.orchestra.run.vm10.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-08T23:23:25.737 INFO:teuthology.orchestra.run.vm10.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-08T23:23:25.737 INFO:teuthology.orchestra.run.vm10.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-08T23:23:25.737 INFO:teuthology.orchestra.run.vm10.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed:
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout:
python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: socat unzip xmlstarlet zip 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be upgraded: 2026-03-08T23:23:25.748 INFO:teuthology.orchestra.run.vm04.stdout: librados2 librbd1 2026-03-08T23:23:25.749 INFO:teuthology.orchestra.run.vm10.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-08T23:23:25.753 INFO:teuthology.orchestra.run.vm10.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-08T23:23:25.754 INFO:teuthology.orchestra.run.vm10.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-08T23:23:25.755 INFO:teuthology.orchestra.run.vm10.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-08T23:23:25.755 INFO:teuthology.orchestra.run.vm10.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-08T23:23:25.756 INFO:teuthology.orchestra.run.vm10.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-08T23:23:25.757 INFO:teuthology.orchestra.run.vm10.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-08T23:23:25.759 INFO:teuthology.orchestra.run.vm10.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-08T23:23:25.759 INFO:teuthology.orchestra.run.vm10.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-08T23:23:25.759 INFO:teuthology.orchestra.run.vm10.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 
2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-08T23:23:25.768 INFO:teuthology.orchestra.run.vm10.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-08T23:23:25.775 INFO:teuthology.orchestra.run.vm10.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-08T23:23:25.776 INFO:teuthology.orchestra.run.vm10.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-08T23:23:25.776 INFO:teuthology.orchestra.run.vm10.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-08T23:23:25.784 INFO:teuthology.orchestra.run.vm10.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-08T23:23:25.784 INFO:teuthology.orchestra.run.vm10.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-08T23:23:25.784 INFO:teuthology.orchestra.run.vm10.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-08T23:23:25.785 INFO:teuthology.orchestra.run.vm10.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-08T23:23:25.785 INFO:teuthology.orchestra.run.vm10.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-08T23:23:25.785 INFO:teuthology.orchestra.run.vm10.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-08T23:23:25.789 INFO:teuthology.orchestra.run.vm10.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-08T23:23:25.791 INFO:teuthology.orchestra.run.vm10.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-08T23:23:25.792 INFO:teuthology.orchestra.run.vm10.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-08T23:23:25.792 INFO:teuthology.orchestra.run.vm10.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-08T23:23:25.800 INFO:teuthology.orchestra.run.vm10.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-08T23:23:25.801 INFO:teuthology.orchestra.run.vm10.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 
2026-03-08T23:23:25.802 INFO:teuthology.orchestra.run.vm10.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-08T23:23:25.804 INFO:teuthology.orchestra.run.vm10.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-08T23:23:25.804 INFO:teuthology.orchestra.run.vm10.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-08T23:23:25.804 INFO:teuthology.orchestra.run.vm10.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-08T23:23:25.832 INFO:teuthology.orchestra.run.vm10.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-08T23:23:25.832 INFO:teuthology.orchestra.run.vm10.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-08T23:23:25.833 INFO:teuthology.orchestra.run.vm10.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-08T23:23:25.842 INFO:teuthology.orchestra.run.vm10.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-08T23:23:25.842 INFO:teuthology.orchestra.run.vm10.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-08T23:23:25.842 INFO:teuthology.orchestra.run.vm10.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-08T23:23:25.842 INFO:teuthology.orchestra.run.vm10.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-08T23:23:25.842 INFO:teuthology.orchestra.run.vm10.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-08T23:23:25.843 INFO:teuthology.orchestra.run.vm10.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-08T23:23:25.845 INFO:teuthology.orchestra.run.vm10.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-08T23:23:25.849 INFO:teuthology.orchestra.run.vm04.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:23:25.849 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 178 MB of archives. 2026-03-08T23:23:25.849 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 782 MB of additional disk space will be used. 
2026-03-08T23:23:25.849 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-08T23:23:25.852 INFO:teuthology.orchestra.run.vm10.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-08T23:23:25.853 INFO:teuthology.orchestra.run.vm10.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-08T23:23:25.853 INFO:teuthology.orchestra.run.vm10.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-08T23:23:25.857 INFO:teuthology.orchestra.run.vm10.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-08T23:23:25.859 INFO:teuthology.orchestra.run.vm10.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-08T23:23:25.859 INFO:teuthology.orchestra.run.vm10.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-08T23:23:25.860 INFO:teuthology.orchestra.run.vm10.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-08T23:23:25.864 INFO:teuthology.orchestra.run.vm10.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-08T23:23:25.864 INFO:teuthology.orchestra.run.vm10.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-08T23:23:25.867 INFO:teuthology.orchestra.run.vm10.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-08T23:23:25.870 INFO:teuthology.orchestra.run.vm10.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-08T23:23:25.870 INFO:teuthology.orchestra.run.vm10.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-08T23:23:25.870 INFO:teuthology.orchestra.run.vm10.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-08T23:23:25.872 INFO:teuthology.orchestra.run.vm10.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-08T23:23:25.872 INFO:teuthology.orchestra.run.vm10.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-08T23:23:25.879 INFO:teuthology.orchestra.run.vm10.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-08T23:23:25.880 INFO:teuthology.orchestra.run.vm10.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-08T23:23:25.880 INFO:teuthology.orchestra.run.vm10.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-08T23:23:25.882 INFO:teuthology.orchestra.run.vm10.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-08T23:23:25.882 INFO:teuthology.orchestra.run.vm10.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 
kB] 2026-03-08T23:23:25.892 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-08T23:23:25.893 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-08T23:23:25.901 INFO:teuthology.orchestra.run.vm10.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-08T23:23:25.906 INFO:teuthology.orchestra.run.vm04.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-08T23:23:25.933 INFO:teuthology.orchestra.run.vm04.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-08T23:23:25.935 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-08T23:23:25.941 INFO:teuthology.orchestra.run.vm04.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-08T23:23:25.943 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-08T23:23:25.944 INFO:teuthology.orchestra.run.vm04.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-08T23:23:25.944 INFO:teuthology.orchestra.run.vm04.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-08T23:23:25.945 INFO:teuthology.orchestra.run.vm04.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-08T23:23:25.963 INFO:teuthology.orchestra.run.vm04.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-08T23:23:25.963 INFO:teuthology.orchestra.run.vm04.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-08T23:23:25.964 INFO:teuthology.orchestra.run.vm04.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-08T23:23:25.965 INFO:teuthology.orchestra.run.vm04.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-08T23:23:25.965 INFO:teuthology.orchestra.run.vm04.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-08T23:23:25.966 INFO:teuthology.orchestra.run.vm04.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-08T23:23:25.967 INFO:teuthology.orchestra.run.vm04.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-08T23:23:25.970 INFO:teuthology.orchestra.run.vm04.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-08T23:23:25.970 INFO:teuthology.orchestra.run.vm04.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-08T23:23:25.971 INFO:teuthology.orchestra.run.vm04.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 
python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-08T23:23:25.979 INFO:teuthology.orchestra.run.vm04.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-08T23:23:25.979 INFO:teuthology.orchestra.run.vm04.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-08T23:23:25.979 INFO:teuthology.orchestra.run.vm04.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-08T23:23:25.979 INFO:teuthology.orchestra.run.vm04.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-08T23:23:25.979 INFO:teuthology.orchestra.run.vm04.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-08T23:23:25.979 INFO:teuthology.orchestra.run.vm04.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-08T23:23:25.980 INFO:teuthology.orchestra.run.vm04.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-08T23:23:25.981 INFO:teuthology.orchestra.run.vm04.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-08T23:23:25.981 INFO:teuthology.orchestra.run.vm04.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-08T23:23:25.989 INFO:teuthology.orchestra.run.vm04.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-08T23:23:25.990 INFO:teuthology.orchestra.run.vm04.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-08T23:23:25.990 INFO:teuthology.orchestra.run.vm04.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-08T23:23:25.990 INFO:teuthology.orchestra.run.vm04.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-08T23:23:25.991 INFO:teuthology.orchestra.run.vm04.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-08T23:23:25.991 INFO:teuthology.orchestra.run.vm04.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-08T23:23:25.991 INFO:teuthology.orchestra.run.vm04.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-08T23:23:25.997 INFO:teuthology.orchestra.run.vm04.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-08T23:23:25.997 INFO:teuthology.orchestra.run.vm04.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-08T23:23:25.997 INFO:teuthology.orchestra.run.vm04.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-08T23:23:26.005 INFO:teuthology.orchestra.run.vm04.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-08T23:23:26.006 INFO:teuthology.orchestra.run.vm04.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 
python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-08T23:23:26.008 INFO:teuthology.orchestra.run.vm04.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-08T23:23:26.009 INFO:teuthology.orchestra.run.vm04.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-08T23:23:26.010 INFO:teuthology.orchestra.run.vm04.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-08T23:23:26.010 INFO:teuthology.orchestra.run.vm04.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-08T23:23:26.011 INFO:teuthology.orchestra.run.vm04.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-08T23:23:26.041 INFO:teuthology.orchestra.run.vm04.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-08T23:23:26.042 INFO:teuthology.orchestra.run.vm04.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-08T23:23:26.042 INFO:teuthology.orchestra.run.vm04.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-08T23:23:26.050 INFO:teuthology.orchestra.run.vm04.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-08T23:23:26.050 INFO:teuthology.orchestra.run.vm04.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-08T23:23:26.050 INFO:teuthology.orchestra.run.vm04.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-08T23:23:26.051 INFO:teuthology.orchestra.run.vm04.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-08T23:23:26.051 INFO:teuthology.orchestra.run.vm04.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-08T23:23:26.051 INFO:teuthology.orchestra.run.vm04.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-08T23:23:26.053 INFO:teuthology.orchestra.run.vm04.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-08T23:23:26.060 INFO:teuthology.orchestra.run.vm04.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-08T23:23:26.061 INFO:teuthology.orchestra.run.vm04.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-08T23:23:26.062 INFO:teuthology.orchestra.run.vm04.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-08T23:23:26.065 INFO:teuthology.orchestra.run.vm04.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-08T23:23:26.067 INFO:teuthology.orchestra.run.vm04.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-08T23:23:26.067 INFO:teuthology.orchestra.run.vm04.stdout:Get:63 
https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-08T23:23:26.068 INFO:teuthology.orchestra.run.vm04.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-08T23:23:26.072 INFO:teuthology.orchestra.run.vm04.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-08T23:23:26.073 INFO:teuthology.orchestra.run.vm04.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-08T23:23:26.075 INFO:teuthology.orchestra.run.vm04.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-08T23:23:26.076 INFO:teuthology.orchestra.run.vm04.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-08T23:23:26.077 INFO:teuthology.orchestra.run.vm04.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-08T23:23:26.077 INFO:teuthology.orchestra.run.vm04.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-08T23:23:26.078 INFO:teuthology.orchestra.run.vm04.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-08T23:23:26.079 INFO:teuthology.orchestra.run.vm04.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-08T23:23:26.085 INFO:teuthology.orchestra.run.vm04.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-08T23:23:26.097 INFO:teuthology.orchestra.run.vm04.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-08T23:23:26.097 INFO:teuthology.orchestra.run.vm04.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-08T23:23:26.098 INFO:teuthology.orchestra.run.vm04.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-08T23:23:26.098 INFO:teuthology.orchestra.run.vm04.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-08T23:23:26.111 INFO:teuthology.orchestra.run.vm04.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-08T23:23:26.112 INFO:teuthology.orchestra.run.vm02.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:23:26.112 INFO:teuthology.orchestra.run.vm02.stdout:Need to get 178 MB of archives. 2026-03-08T23:23:26.112 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 782 MB of additional disk space will be used. 
2026-03-08T23:23:26.112 INFO:teuthology.orchestra.run.vm02.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-08T23:23:26.179 INFO:teuthology.orchestra.run.vm10.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-08T23:23:26.223 INFO:teuthology.orchestra.run.vm02.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-08T23:23:26.363 INFO:teuthology.orchestra.run.vm04.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-08T23:23:26.601 INFO:teuthology.orchestra.run.vm02.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-08T23:23:26.617 INFO:teuthology.orchestra.run.vm02.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-08T23:23:26.716 INFO:teuthology.orchestra.run.vm02.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-08T23:23:27.004 INFO:teuthology.orchestra.run.vm02.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-08T23:23:27.018 INFO:teuthology.orchestra.run.vm02.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-08T23:23:27.030 INFO:teuthology.orchestra.run.vm10.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-08T23:23:27.055 INFO:teuthology.orchestra.run.vm02.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-08T23:23:27.060 INFO:teuthology.orchestra.run.vm02.stdout:Get:9 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-08T23:23:27.068 INFO:teuthology.orchestra.run.vm02.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-08T23:23:27.070 INFO:teuthology.orchestra.run.vm02.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-08T23:23:27.071 INFO:teuthology.orchestra.run.vm02.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-08T23:23:27.072 INFO:teuthology.orchestra.run.vm02.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-08T23:23:27.096 INFO:teuthology.orchestra.run.vm02.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-08T23:23:27.101 INFO:teuthology.orchestra.run.vm02.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-08T23:23:27.106 
INFO:teuthology.orchestra.run.vm02.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-08T23:23:27.158 INFO:teuthology.orchestra.run.vm10.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-08T23:23:27.170 INFO:teuthology.orchestra.run.vm10.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-08T23:23:27.176 INFO:teuthology.orchestra.run.vm10.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-08T23:23:27.176 INFO:teuthology.orchestra.run.vm10.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-08T23:23:27.180 INFO:teuthology.orchestra.run.vm10.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-08T23:23:27.181 INFO:teuthology.orchestra.run.vm10.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-08T23:23:27.188 INFO:teuthology.orchestra.run.vm10.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-08T23:23:27.203 INFO:teuthology.orchestra.run.vm02.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-08T23:23:27.204 INFO:teuthology.orchestra.run.vm02.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-08T23:23:27.206 INFO:teuthology.orchestra.run.vm02.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-08T23:23:27.210 INFO:teuthology.orchestra.run.vm02.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-08T23:23:27.212 INFO:teuthology.orchestra.run.vm02.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-08T23:23:27.212 INFO:teuthology.orchestra.run.vm02.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-08T23:23:27.213 INFO:teuthology.orchestra.run.vm02.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-08T23:23:27.214 INFO:teuthology.orchestra.run.vm02.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-08T23:23:27.214 INFO:teuthology.orchestra.run.vm02.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-08T23:23:27.248 INFO:teuthology.orchestra.run.vm04.stdout:Get:80 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-08T23:23:27.312 INFO:teuthology.orchestra.run.vm02.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-08T23:23:27.312 INFO:teuthology.orchestra.run.vm02.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-08T23:23:27.312 INFO:teuthology.orchestra.run.vm02.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-08T23:23:27.329 INFO:teuthology.orchestra.run.vm02.stdout:Get:29 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-08T23:23:27.332 INFO:teuthology.orchestra.run.vm02.stdout:Get:30 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-08T23:23:27.333 INFO:teuthology.orchestra.run.vm02.stdout:Get:31 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-08T23:23:27.333 INFO:teuthology.orchestra.run.vm02.stdout:Get:32 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-08T23:23:27.334 INFO:teuthology.orchestra.run.vm02.stdout:Get:33 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-08T23:23:27.334 INFO:teuthology.orchestra.run.vm02.stdout:Get:34 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-08T23:23:27.335 INFO:teuthology.orchestra.run.vm02.stdout:Get:35 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-08T23:23:27.383 INFO:teuthology.orchestra.run.vm04.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-08T23:23:27.414 INFO:teuthology.orchestra.run.vm02.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-08T23:23:27.414 INFO:teuthology.orchestra.run.vm02.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-08T23:23:27.418 INFO:teuthology.orchestra.run.vm02.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-08T23:23:27.418 INFO:teuthology.orchestra.run.vm02.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-08T23:23:27.418 INFO:teuthology.orchestra.run.vm02.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 
[60.5 kB] 2026-03-08T23:23:27.419 INFO:teuthology.orchestra.run.vm02.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-08T23:23:27.420 INFO:teuthology.orchestra.run.vm02.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-08T23:23:27.489 INFO:teuthology.orchestra.run.vm04.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-08T23:23:27.494 INFO:teuthology.orchestra.run.vm04.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-08T23:23:27.495 INFO:teuthology.orchestra.run.vm04.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-08T23:23:27.497 INFO:teuthology.orchestra.run.vm04.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-08T23:23:27.499 INFO:teuthology.orchestra.run.vm04.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-08T23:23:27.504 INFO:teuthology.orchestra.run.vm04.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-08T23:23:27.516 INFO:teuthology.orchestra.run.vm02.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-08T23:23:27.517 INFO:teuthology.orchestra.run.vm02.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-08T23:23:27.518 INFO:teuthology.orchestra.run.vm02.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-08T23:23:27.527 INFO:teuthology.orchestra.run.vm10.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-08T23:23:27.528 INFO:teuthology.orchestra.run.vm10.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-08T23:23:27.533 INFO:teuthology.orchestra.run.vm10.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-08T23:23:27.618 INFO:teuthology.orchestra.run.vm02.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-08T23:23:27.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:27 vm04 bash[19918]: cluster 2026-03-08T23:23:26.229711+0000 mgr.x (mgr.14150) 323 : cluster [DBG] pgmap v271: 4 pgs: 4 
active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:27.626 INFO:teuthology.orchestra.run.vm02.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-08T23:23:27.626 INFO:teuthology.orchestra.run.vm02.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-08T23:23:27.627 INFO:teuthology.orchestra.run.vm02.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-08T23:23:27.627 INFO:teuthology.orchestra.run.vm02.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-08T23:23:27.628 INFO:teuthology.orchestra.run.vm02.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-08T23:23:27.631 INFO:teuthology.orchestra.run.vm02.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-08T23:23:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:27 vm02 bash[17457]: cluster 2026-03-08T23:23:26.229711+0000 mgr.x (mgr.14150) 323 : cluster [DBG] pgmap v271: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:27 vm10 bash[20034]: cluster 2026-03-08T23:23:26.229711+0000 mgr.x (mgr.14150) 323 : cluster [DBG] pgmap v271: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:27.720 INFO:teuthology.orchestra.run.vm02.stdout:Get:53 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-08T23:23:27.720 INFO:teuthology.orchestra.run.vm02.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-08T23:23:27.721 INFO:teuthology.orchestra.run.vm02.stdout:Get:55 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-08T23:23:27.724 INFO:teuthology.orchestra.run.vm02.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-08T23:23:27.724 INFO:teuthology.orchestra.run.vm02.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0
kB] 2026-03-08T23:23:27.801 INFO:teuthology.orchestra.run.vm02.stdout:Get:58 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-08T23:23:27.823 INFO:teuthology.orchestra.run.vm02.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-08T23:23:27.858 INFO:teuthology.orchestra.run.vm02.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-08T23:23:27.861 INFO:teuthology.orchestra.run.vm02.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-08T23:23:27.861 INFO:teuthology.orchestra.run.vm02.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-08T23:23:27.891 INFO:teuthology.orchestra.run.vm04.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-08T23:23:27.891 INFO:teuthology.orchestra.run.vm04.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-08T23:23:27.900 INFO:teuthology.orchestra.run.vm04.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-08T23:23:27.944 INFO:teuthology.orchestra.run.vm02.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-08T23:23:27.944 INFO:teuthology.orchestra.run.vm02.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-08T23:23:27.944 INFO:teuthology.orchestra.run.vm02.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-08T23:23:27.944 INFO:teuthology.orchestra.run.vm02.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-08T23:23:27.945 INFO:teuthology.orchestra.run.vm02.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-08T23:23:27.945 INFO:teuthology.orchestra.run.vm02.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-08T23:23:28.028 INFO:teuthology.orchestra.run.vm02.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-08T23:23:28.030 INFO:teuthology.orchestra.run.vm02.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-08T23:23:28.031 INFO:teuthology.orchestra.run.vm02.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-08T23:23:28.130 INFO:teuthology.orchestra.run.vm02.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-08T23:23:28.135 INFO:teuthology.orchestra.run.vm02.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 
1.6.1-2.1 [265 kB]
2026-03-08T23:23:28.138 INFO:teuthology.orchestra.run.vm02.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-08T23:23:28.140 INFO:teuthology.orchestra.run.vm02.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-08T23:23:28.140 INFO:teuthology.orchestra.run.vm02.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-08T23:23:28.147 INFO:teuthology.orchestra.run.vm02.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-08T23:23:28.148 INFO:teuthology.orchestra.run.vm02.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-08T23:23:28.232 INFO:teuthology.orchestra.run.vm02.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-08T23:23:28.232 INFO:teuthology.orchestra.run.vm02.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-08T23:23:28.233 INFO:teuthology.orchestra.run.vm02.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-08T23:23:28.334 INFO:teuthology.orchestra.run.vm02.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-08T23:23:28.336 INFO:teuthology.orchestra.run.vm02.stdout:Get:83 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-08T23:23:28.337 INFO:teuthology.orchestra.run.vm02.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-08T23:23:28.349 INFO:teuthology.orchestra.run.vm02.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-08T23:23:28.349 INFO:teuthology.orchestra.run.vm02.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-08T23:23:28.349 INFO:teuthology.orchestra.run.vm02.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-08T23:23:28.352 INFO:teuthology.orchestra.run.vm02.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-08T23:23:28.436 INFO:teuthology.orchestra.run.vm02.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-08T23:23:28.599 INFO:teuthology.orchestra.run.vm02.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-08T23:23:28.673 INFO:teuthology.orchestra.run.vm10.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-08T23:23:28.901 INFO:teuthology.orchestra.run.vm10.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-08T23:23:28.904 INFO:teuthology.orchestra.run.vm10.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-08T23:23:28.907 INFO:teuthology.orchestra.run.vm10.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-08T23:23:28.971 INFO:teuthology.orchestra.run.vm10.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-08T23:23:29.218 INFO:teuthology.orchestra.run.vm10.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-08T23:23:29.349 INFO:teuthology.orchestra.run.vm02.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-08T23:23:29.399 INFO:teuthology.orchestra.run.vm04.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-08T23:23:29.647 INFO:teuthology.orchestra.run.vm04.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-08T23:23:29.654 INFO:teuthology.orchestra.run.vm04.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-08T23:23:29.656 INFO:teuthology.orchestra.run.vm04.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-08T23:23:29.679 INFO:teuthology.orchestra.run.vm04.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-08T23:23:29.688 INFO:teuthology.orchestra.run.vm02.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-08T23:23:29.693 INFO:teuthology.orchestra.run.vm02.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-08T23:23:29.697 INFO:teuthology.orchestra.run.vm02.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-08T23:23:29.788 INFO:teuthology.orchestra.run.vm02.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-08T23:23:29.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:29 vm04 bash[19918]: cluster 2026-03-08T23:23:28.230059+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:29.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:29 vm04 bash[19918]: cluster 2026-03-08T23:23:28.230059+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:29 vm02 bash[17457]: cluster 2026-03-08T23:23:28.230059+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:29 vm02 bash[17457]: cluster 2026-03-08T23:23:28.230059+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:29 vm10 bash[20034]: cluster 2026-03-08T23:23:28.230059+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:29 vm10 bash[20034]: cluster 2026-03-08T23:23:28.230059+0000 mgr.x (mgr.14150) 324 : cluster [DBG] pgmap v272: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:30.025 INFO:teuthology.orchestra.run.vm04.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-08T23:23:30.125 INFO:teuthology.orchestra.run.vm10.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-08T23:23:30.125 INFO:teuthology.orchestra.run.vm10.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-08T23:23:30.154 INFO:teuthology.orchestra.run.vm10.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-08T23:23:30.167 INFO:teuthology.orchestra.run.vm02.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-08T23:23:30.259 INFO:teuthology.orchestra.run.vm10.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-08T23:23:30.345 INFO:teuthology.orchestra.run.vm10.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-08T23:23:30.347 INFO:teuthology.orchestra.run.vm10.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-08T23:23:30.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:30 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:23:30.396 INFO:teuthology.orchestra.run.vm10.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-08T23:23:30.764 INFO:teuthology.orchestra.run.vm10.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-08T23:23:30.764 INFO:teuthology.orchestra.run.vm10.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-08T23:23:30.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:30 vm04 bash[19918]: audit 2026-03-08T23:23:30.002539+0000 mgr.x (mgr.14150) 325 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:30.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:30 vm04 bash[19918]: audit 2026-03-08T23:23:30.002539+0000 mgr.x (mgr.14150) 325 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:30 vm02 bash[17457]: audit 2026-03-08T23:23:30.002539+0000 mgr.x (mgr.14150) 325 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:30 vm02 bash[17457]: audit 2026-03-08T23:23:30.002539+0000 mgr.x (mgr.14150) 325 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:30.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:30 vm10 bash[20034]: audit 2026-03-08T23:23:30.002539+0000 mgr.x (mgr.14150) 325 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:30.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:30 vm10 bash[20034]: audit 2026-03-08T23:23:30.002539+0000 mgr.x (mgr.14150) 325 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:31.105 INFO:teuthology.orchestra.run.vm04.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-08T23:23:31.106 INFO:teuthology.orchestra.run.vm04.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-08T23:23:31.193 INFO:teuthology.orchestra.run.vm04.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-08T23:23:31.316 INFO:teuthology.orchestra.run.vm04.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-08T23:23:31.343 INFO:teuthology.orchestra.run.vm04.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-08T23:23:31.346 INFO:teuthology.orchestra.run.vm04.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-08T23:23:31.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:30 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:23:31.468 INFO:teuthology.orchestra.run.vm04.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-08T23:23:31.552 INFO:teuthology.orchestra.run.vm02.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-08T23:23:31.552 INFO:teuthology.orchestra.run.vm02.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-08T23:23:31.587 INFO:teuthology.orchestra.run.vm02.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-08T23:23:31.738 INFO:teuthology.orchestra.run.vm02.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-08T23:23:31.814 INFO:teuthology.orchestra.run.vm02.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-08T23:23:31.819 INFO:teuthology.orchestra.run.vm02.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-08T23:23:31.867 INFO:teuthology.orchestra.run.vm04.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-08T23:23:31.867 INFO:teuthology.orchestra.run.vm04.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-08T23:23:31.955 INFO:teuthology.orchestra.run.vm02.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-08T23:23:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:31 vm04 bash[19918]: cluster 2026-03-08T23:23:30.230359+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:31 vm04 bash[19918]: cluster 2026-03-08T23:23:30.230359+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:31 vm04 bash[19918]: audit 2026-03-08T23:23:30.943378+0000 mgr.x (mgr.14150) 327 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:31 vm04 bash[19918]: audit 2026-03-08T23:23:30.943378+0000 mgr.x (mgr.14150) 327 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:31 vm02 bash[17457]: cluster 2026-03-08T23:23:30.230359+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:31 vm02 bash[17457]: cluster 2026-03-08T23:23:30.230359+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:31 vm02 bash[17457]: audit 2026-03-08T23:23:30.943378+0000 mgr.x (mgr.14150) 327 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:32.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:31 vm02 bash[17457]: audit 2026-03-08T23:23:30.943378+0000 mgr.x (mgr.14150) 327 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:31 vm10 bash[20034]: cluster 2026-03-08T23:23:30.230359+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:31 vm10 bash[20034]: cluster 2026-03-08T23:23:30.230359+0000 mgr.x (mgr.14150) 326 : cluster [DBG] pgmap v273: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:31 vm10 bash[20034]: audit 2026-03-08T23:23:30.943378+0000 mgr.x (mgr.14150) 327 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:31 vm10 bash[20034]: audit 2026-03-08T23:23:30.943378+0000 mgr.x (mgr.14150) 327 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:32.476 INFO:teuthology.orchestra.run.vm02.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-08T23:23:32.476 INFO:teuthology.orchestra.run.vm02.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-08T23:23:33.033 INFO:teuthology.orchestra.run.vm10.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-08T23:23:33.089 INFO:teuthology.orchestra.run.vm10.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-08T23:23:33.089 INFO:teuthology.orchestra.run.vm10.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-08T23:23:33.679 INFO:teuthology.orchestra.run.vm10.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-08T23:23:34.082 INFO:teuthology.orchestra.run.vm10.stdout:Fetched 178 MB in 8s (21.9 MB/s)
2026-03-08T23:23:34.105 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-08T23:23:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:33 vm04 bash[19918]: cluster 2026-03-08T23:23:32.230661+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:33 vm04 bash[19918]: cluster 2026-03-08T23:23:32.230661+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:34.143 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-08T23:23:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:33 vm02 bash[17457]: cluster 2026-03-08T23:23:32.230661+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:34.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:33 vm02 bash[17457]: cluster 2026-03-08T23:23:32.230661+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:34.145 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-08T23:23:34.147 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T23:23:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:33 vm10 bash[20034]: cluster 2026-03-08T23:23:32.230661+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:33 vm10 bash[20034]: cluster 2026-03-08T23:23:32.230661+0000 mgr.x (mgr.14150) 328 : cluster [DBG] pgmap v274: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:34.167 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-08T23:23:34.173 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-08T23:23:34.173 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-08T23:23:34.181 INFO:teuthology.orchestra.run.vm04.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-08T23:23:34.187 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-08T23:23:34.193 INFO:teuthology.orchestra.run.vm04.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-08T23:23:34.193 INFO:teuthology.orchestra.run.vm04.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-08T23:23:34.194 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-08T23:23:34.195 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T23:23:34.215 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-08T23:23:34.221 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:34.225 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:34.289 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-08T23:23:34.296 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:34.296 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:34.315 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-08T23:23:34.321 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:34.321 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:34.346 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-08T23:23:34.351 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-08T23:23:34.352 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T23:23:34.377 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:34.379 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-08T23:23:34.473 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:34.474 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-08T23:23:34.537 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libnbd0.
2026-03-08T23:23:34.544 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-08T23:23:34.545 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-08T23:23:34.562 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libcephfs2.
2026-03-08T23:23:34.568 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:34.568 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:34.596 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rados.
2026-03-08T23:23:34.603 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:34.604 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:34.622 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-08T23:23:34.628 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:34.629 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:34.642 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cephfs.
2026-03-08T23:23:34.647 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:34.648 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:34.667 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-08T23:23:34.673 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:34.673 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:34.691 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-08T23:23:34.696 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-08T23:23:34.697 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:23:34.704 INFO:teuthology.orchestra.run.vm04.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-08T23:23:34.713 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-prettytable.
2026-03-08T23:23:34.718 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-08T23:23:34.718 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-08T23:23:34.732 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rbd.
2026-03-08T23:23:34.737 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:34.738 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:34.757 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-08T23:23:34.762 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-08T23:23:34.763 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T23:23:34.783 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-08T23:23:34.789 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-08T23:23:34.790 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-08T23:23:34.889 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-08T23:23:34.894 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-08T23:23:34.895 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T23:23:34.917 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua5.1.
2026-03-08T23:23:34.921 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-08T23:23:34.922 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-08T23:23:34.943 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua-any.
2026-03-08T23:23:34.948 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-08T23:23:34.949 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-08T23:23:34.963 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package zip.
2026-03-08T23:23:34.969 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-08T23:23:34.970 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking zip (3.0-12build2) ...
2026-03-08T23:23:34.988 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package unzip.
2026-03-08T23:23:34.994 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-08T23:23:34.995 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:23:35.014 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package luarocks.
2026-03-08T23:23:35.021 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-08T23:23:35.023 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-08T23:23:35.057 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 178 MB in 9s (19.9 MB/s)
2026-03-08T23:23:35.071 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package librgw2.
2026-03-08T23:23:35.072 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-08T23:23:35.078 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.079 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.114 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-08T23:23:35.116 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-08T23:23:35.118 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T23:23:35.140 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-08T23:23:35.146 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-08T23:23:35.147 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-08T23:23:35.163 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-08T23:23:35.169 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-08T23:23:35.170 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T23:23:35.214 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rgw.
2026-03-08T23:23:35.220 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.220 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-08T23:23:35.221 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.224 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:35.229 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:35.239 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-08T23:23:35.246 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-08T23:23:35.247 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-08T23:23:35.270 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libradosstriper1.
2026-03-08T23:23:35.274 INFO:teuthology.orchestra.run.vm02.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-08T23:23:35.276 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-08T23:23:35.276 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.277 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.282 INFO:teuthology.orchestra.run.vm02.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-08T23:23:35.282 INFO:teuthology.orchestra.run.vm02.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-08T23:23:35.284 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:35.285 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:35.304 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-common.
2026-03-08T23:23:35.305 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-08T23:23:35.312 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:35.312 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.313 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:35.313 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.338 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-08T23:23:35.345 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-08T23:23:35.347 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T23:23:35.373 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.376 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-08T23:23:35.483 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.485 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-08T23:23:35.558 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libnbd0.
2026-03-08T23:23:35.563 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-08T23:23:35.564 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-08T23:23:35.579 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs2.
2026-03-08T23:23:35.584 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.585 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.613 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rados.
2026-03-08T23:23:35.619 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.620 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.642 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-08T23:23:35.648 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:35.649 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.665 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cephfs.
2026-03-08T23:23:35.671 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.672 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.689 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-08T23:23:35.695 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:35.777 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.819 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-08T23:23:35.824 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-08T23:23:35.825 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:23:35.825 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-base.
2026-03-08T23:23:35.831 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.836 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.844 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-prettytable.
2026-03-08T23:23:35.850 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-08T23:23:35.851 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-08T23:23:35.865 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rbd.
2026-03-08T23:23:35.870 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:35.871 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:35.981 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-08T23:23:35.983 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-08T23:23:35.984 INFO:teuthology.orchestra.run.vm02.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-08T23:23:35.987 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-08T23:23:35.988 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-08T23:23:35.988 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T23:23:35.989 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-08T23:23:36.006 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cheroot.
2026-03-08T23:23:36.009 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-08T23:23:36.011 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-08T23:23:36.012 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T23:23:36.015 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-08T23:23:36.016 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-08T23:23:36.034 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-08T23:23:36.037 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-08T23:23:36.040 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-08T23:23:36.041 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-08T23:23:36.043 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-08T23:23:36.044 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T23:23:36.060 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-08T23:23:36.065 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-08T23:23:36.066 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-08T23:23:36.067 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua5.1.
2026-03-08T23:23:36.073 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-08T23:23:36.075 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-08T23:23:36.085 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-08T23:23:36.091 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-08T23:23:36.092 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-08T23:23:36.093 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-any.
2026-03-08T23:23:36.100 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-08T23:23:36.100 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-08T23:23:36.109 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-tempora.
2026-03-08T23:23:36.114 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package zip.
2026-03-08T23:23:36.115 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-08T23:23:36.115 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-08T23:23:36.120 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-08T23:23:36.121 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking zip (3.0-12build2) ...
2026-03-08T23:23:36.134 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-portend.
2026-03-08T23:23:36.139 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package unzip.
2026-03-08T23:23:36.140 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-08T23:23:36.141 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-08T23:23:36.145 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-08T23:23:36.145 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:23:36.159 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-08T23:23:36.164 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package luarocks.
2026-03-08T23:23:36.165 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-08T23:23:36.166 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-08T23:23:36.170 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-08T23:23:36.171 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-08T23:23:36.187 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-08T23:23:36.192 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-08T23:23:36.193 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-08T23:23:36.221 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librgw2.
2026-03-08T23:23:36.226 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.227 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.228 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-natsort.
2026-03-08T23:23:36.234 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-08T23:23:36.235 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-08T23:23:36.254 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-logutils.
2026-03-08T23:23:36.260 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-08T23:23:36.261 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-08T23:23:36.281 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-mako.
2026-03-08T23:23:36.287 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-08T23:23:36.288 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:23:36.311 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-08T23:23:36.318 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-08T23:23:36.319 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:23:36.328 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 178 MB in 10s (17.2 MB/s)
2026-03-08T23:23:36.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:35 vm04 bash[19918]: cluster 2026-03-08T23:23:34.231021+0000 mgr.x (mgr.14150) 329 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:36.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:35 vm04 bash[19918]: cluster 2026-03-08T23:23:34.231021+0000 mgr.x (mgr.14150) 329 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:36.391 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-08T23:23:36.393 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-08T23:23:36.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:35 vm02 bash[17457]: cluster 2026-03-08T23:23:34.231021+0000 mgr.x (mgr.14150) 329 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:36.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:35 vm02 bash[17457]: cluster 2026-03-08T23:23:34.231021+0000 mgr.x (mgr.14150) 329 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:36.398 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-08T23:23:36.399 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:23:36.402 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rgw.
2026-03-08T23:23:36.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:35 vm10 bash[20034]: cluster 2026-03-08T23:23:34.231021+0000 mgr.x (mgr.14150) 329 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:36.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:35 vm10 bash[20034]: cluster 2026-03-08T23:23:34.231021+0000 mgr.x (mgr.14150) 329 : cluster [DBG] pgmap v275: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:36.408 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.409 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.415 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-webob.
2026-03-08T23:23:36.421 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-08T23:23:36.422 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:23:36.425 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-08T23:23:36.427 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-08T23:23:36.427 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-08T23:23:36.429 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T23:23:36.433 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-08T23:23:36.434 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-08T23:23:36.444 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-waitress.
2026-03-08T23:23:36.449 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-08T23:23:36.450 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libradosstriper1.
2026-03-08T23:23:36.451 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:23:36.453 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-08T23:23:36.455 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.455 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-08T23:23:36.456 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.456 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-08T23:23:36.469 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-tempita.
2026-03-08T23:23:36.473 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-08T23:23:36.473 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-08T23:23:36.474 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:23:36.479 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-08T23:23:36.479 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T23:23:36.484 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-common.
2026-03-08T23:23:36.484 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.485 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.488 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-paste.
2026-03-08T23:23:36.493 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-08T23:23:36.494 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:23:36.500 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-08T23:23:36.505 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:36.509 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:36.535 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-08T23:23:36.540 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-08T23:23:36.541 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T23:23:36.547 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-08T23:23:36.552 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:36.553 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:36.557 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-08T23:23:36.562 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-08T23:23:36.563 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-08T23:23:36.573 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-08T23:23:36.579 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-08T23:23:36.580 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:36.581 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-webtest.
2026-03-08T23:23:36.588 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-08T23:23:36.589 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-08T23:23:36.607 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-08T23:23:36.607 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pecan.
2026-03-08T23:23:36.613 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-08T23:23:36.614 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T23:23:36.614 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-08T23:23:36.615 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T23:23:36.643 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.646 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-08T23:23:36.649 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-08T23:23:36.654 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-08T23:23:36.655 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:23:36.701 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-08T23:23:36.707 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:36.708 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.723 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.726 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-08T23:23:36.755 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-08T23:23:36.761 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.763 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.802 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr.
2026-03-08T23:23:36.807 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.808 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.809 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libnbd0.
2026-03-08T23:23:36.816 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-08T23:23:36.817 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-08T23:23:36.833 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libcephfs2.
2026-03-08T23:23:36.839 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.841 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.856 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mon.
2026-03-08T23:23:36.862 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:36.863 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:36.868 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rados.
2026-03-08T23:23:36.875 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.039 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.084 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-08T23:23:37.085 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-08T23:23:37.090 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-08T23:23:37.092 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:37.093 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T23:23:37.093 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.094 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-base.
2026-03-08T23:23:37.101 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.106 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.120 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cephfs.
2026-03-08T23:23:37.126 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.128 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.129 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-osd.
2026-03-08T23:23:37.135 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.138 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.155 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-08T23:23:37.163 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:37.163 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.192 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-08T23:23:37.197 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-08T23:23:37.230 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:23:37.263 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-prettytable.
2026-03-08T23:23:37.264 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-08T23:23:37.270 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-08T23:23:37.270 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-08T23:23:37.275 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-08T23:23:37.276 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-08T23:23:37.301 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rbd.
2026-03-08T23:23:37.305 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cheroot.
2026-03-08T23:23:37.307 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.309 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.312 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-08T23:23:37.313 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T23:23:37.343 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-08T23:23:37.349 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-08T23:23:37.350 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T23:23:37.351 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-08T23:23:37.359 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-08T23:23:37.361 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-08T23:23:37.378 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-08T23:23:37.384 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-08T23:23:37.385 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-08T23:23:37.397 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-08T23:23:37.403 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-08T23:23:37.404 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-08T23:23:37.415 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-08T23:23:37.421 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-08T23:23:37.424 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T23:23:37.437 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-08T23:23:37.443 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-08T23:23:37.450 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-08T23:23:37.452 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua5.1.
2026-03-08T23:23:37.458 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-08T23:23:37.666 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-08T23:23:37.676 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempora.
2026-03-08T23:23:37.682 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-08T23:23:37.684 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-08T23:23:37.684 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph.
2026-03-08T23:23:37.690 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.693 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.694 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua-any.
2026-03-08T23:23:37.699 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-08T23:23:37.700 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-08T23:23:37.717 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-portend.
2026-03-08T23:23:37.719 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package zip.
2026-03-08T23:23:37.724 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-08T23:23:37.725 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-08T23:23:37.727 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking zip (3.0-12build2) ...
2026-03-08T23:23:37.727 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-08T23:23:37.728 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-fuse.
2026-03-08T23:23:37.734 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.736 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.755 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-08T23:23:37.756 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package unzip.
2026-03-08T23:23:37.761 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-08T23:23:37.762 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-08T23:23:37.768 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:23:37.768 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-08T23:23:37.795 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mds.
2026-03-08T23:23:37.800 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.808 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.808 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-08T23:23:37.809 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package luarocks.
2026-03-08T23:23:37.814 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-08T23:23:37.815 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-08T23:23:37.815 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-08T23:23:37.816 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-08T23:23:37.878 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-natsort.
2026-03-08T23:23:37.884 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package cephadm.
2026-03-08T23:23:37.884 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-08T23:23:37.889 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.891 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.891 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-08T23:23:37.893 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package librgw2.
2026-03-08T23:23:37.898 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:37.901 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:37.929 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-08T23:23:37.930 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-logutils.
2026-03-08T23:23:37.935 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T23:23:37.936 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-08T23:23:37.941 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:37.942 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-08T23:23:38.014 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-mako.
2026-03-08T23:23:38.019 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-08T23:23:38.021 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:23:38.032 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rgw.
2026-03-08T23:23:38.033 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-08T23:23:38.038 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.040 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.040 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:38.045 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.061 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-08T23:23:38.068 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-08T23:23:38.069 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:23:38.081 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-08T23:23:38.084 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-08T23:23:38.090 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-08T23:23:38.092 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-08T23:23:38.092 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-08T23:23:38.097 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-08T23:23:38.098 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-08T23:23:38.099 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:23:38.099 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-08T23:23:38.121 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libradosstriper1.
2026-03-08T23:23:38.124 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.126 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.127 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webob.
2026-03-08T23:23:38.133 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-routes.
2026-03-08T23:23:38.133 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-08T23:23:38.136 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:23:38.138 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-08T23:23:38.146 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:23:38.167 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-common.
2026-03-08T23:23:38.168 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-waitress.
2026-03-08T23:23:38.173 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.173 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-08T23:23:38.174 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.175 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:23:38.186 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-08T23:23:38.191 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:38.192 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.208 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempita.
2026-03-08T23:23:38.214 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-08T23:23:38.217 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:23:38.249 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-paste.
2026-03-08T23:23:38.255 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-08T23:23:38.256 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:23:38.292 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-08T23:23:38.299 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-08T23:23:38.300 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T23:23:38.318 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-08T23:23:38.324 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-08T23:23:38.325 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-08T23:23:38.345 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webtest.
2026-03-08T23:23:38.351 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-08T23:23:38.352 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-08T23:23:38.374 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pecan.
2026-03-08T23:23:38.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:38 vm04 bash[19918]: cluster 2026-03-08T23:23:36.231315+0000 mgr.x (mgr.14150) 330 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:38.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:38 vm04 bash[19918]: cluster 2026-03-08T23:23:36.231315+0000 mgr.x (mgr.14150) 330 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:38.380 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-08T23:23:38.382 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T23:23:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:38 vm02 bash[17457]: cluster 2026-03-08T23:23:36.231315+0000 mgr.x (mgr.14150) 330 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:38.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:38 vm02 bash[17457]: cluster 2026-03-08T23:23:36.231315+0000 mgr.x (mgr.14150) 330 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:38.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:38 vm10 bash[20034]: cluster 2026-03-08T23:23:36.231315+0000 mgr.x (mgr.14150) 330 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:38.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:38 vm10 bash[20034]: cluster 2026-03-08T23:23:36.231315+0000 mgr.x (mgr.14150) 330 : cluster [DBG] pgmap v276: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:38.422 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-08T23:23:38.429 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-08T23:23:38.495 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:23:38.735 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-08T23:23:38.737 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-base.
2026-03-08T23:23:38.741 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:38.741 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.743 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.748 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.751 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-08T23:23:38.757 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-08T23:23:38.758 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:23:38.807 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-08T23:23:38.815 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.815 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.850 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-joblib.
2026-03-08T23:23:38.855 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-08T23:23:38.856 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:23:38.861 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr.
2026-03-08T23:23:38.862 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-08T23:23:38.867 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.868 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-08T23:23:38.868 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.869 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-08T23:23:38.888 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cheroot.
2026-03-08T23:23:38.894 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-08T23:23:38.894 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T23:23:38.895 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-08T23:23:38.901 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-08T23:23:38.901 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mon.
2026-03-08T23:23:38.902 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:23:38.907 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:38.908 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:38.914 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-08T23:23:38.917 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-sklearn.
2026-03-08T23:23:38.921 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-08T23:23:38.922 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-08T23:23:38.924 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-08T23:23:38.925 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:23:38.937 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-08T23:23:38.943 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-08T23:23:38.946 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-08T23:23:38.959 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-08T23:23:38.964 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-08T23:23:38.965 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-08T23:23:39.030 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-tempora.
2026-03-08T23:23:39.037 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-08T23:23:39.037 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-08T23:23:39.038 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-08T23:23:39.038 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-08T23:23:39.039 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T23:23:39.056 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-portend.
2026-03-08T23:23:39.059 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-osd.
2026-03-08T23:23:39.066 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-08T23:23:39.066 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-08T23:23:39.069 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.070 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.070 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-08T23:23:39.077 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:39.078 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.083 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-08T23:23:39.090 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-08T23:23:39.091 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-08T23:23:39.108 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-08T23:23:39.113 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-08T23:23:39.114 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-08T23:23:39.146 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-natsort.
2026-03-08T23:23:39.151 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-08T23:23:39.152 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-08T23:23:39.169 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-logutils.
2026-03-08T23:23:39.175 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-08T23:23:39.176 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-08T23:23:39.195 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-mako.
2026-03-08T23:23:39.201 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-08T23:23:39.202 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:23:39.223 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-08T23:23:39.228 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-08T23:23:39.229 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:23:39.247 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-08T23:23:39.252 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-08T23:23:39.253 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:23:39.268 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-webob.
2026-03-08T23:23:39.274 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-08T23:23:39.275 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:23:39.301 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-waitress.
2026-03-08T23:23:39.307 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-08T23:23:39.309 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:23:39.328 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-tempita.
2026-03-08T23:23:39.333 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-08T23:23:39.534 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:23:39.547 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph.
2026-03-08T23:23:39.549 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-paste.
2026-03-08T23:23:39.552 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.552 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.553 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-cachetools.
2026-03-08T23:23:39.555 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-08T23:23:39.555 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-08T23:23:39.556 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:23:39.556 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-08T23:23:39.565 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-fuse.
2026-03-08T23:23:39.570 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.571 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.573 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-rsa.
2026-03-08T23:23:39.577 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-08T23:23:39.578 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-08T23:23:39.590 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-08T23:23:39.596 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-08T23:23:39.597 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T23:23:39.601 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-google-auth.
2026-03-08T23:23:39.602 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mds.
2026-03-08T23:23:39.606 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-08T23:23:39.607 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-08T23:23:39.608 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.609 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.613 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-08T23:23:39.618 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-08T23:23:39.619 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-08T23:23:39.633 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-08T23:23:39.636 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-08T23:23:39.636 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:23:39.637 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-webtest.
2026-03-08T23:23:39.642 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-08T23:23:39.652 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-08T23:23:39.663 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package cephadm.
2026-03-08T23:23:39.667 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-websocket.
2026-03-08T23:23:39.668 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.668 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pecan.
2026-03-08T23:23:39.668 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.672 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-08T23:23:39.673 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-08T23:23:39.674 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-08T23:23:39.675 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T23:23:39.685 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-08T23:23:39.691 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T23:23:39.692 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:39.693 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-08T23:23:39.698 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-08T23:23:39.709 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-08T23:23:39.713 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T23:23:39.715 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-08T23:23:39.716 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:23:39.723 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-08T23:23:39.729 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:39.730 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.751 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-08T23:23:39.757 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:39.757 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-08T23:23:39.758 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.763 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-08T23:23:39.764 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-08T23:23:39.783 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-routes.
2026-03-08T23:23:39.789 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-08T23:23:39.790 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:23:39.796 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-08T23:23:39.801 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.802 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.842 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-08T23:23:39.843 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr.
2026-03-08T23:23:39.848 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:39.848 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.849 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.849 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.888 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-08T23:23:39.889 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mon.
2026-03-08T23:23:39.894 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:39.895 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:39.895 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.896 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:39.914 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-08T23:23:39.920 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-08T23:23:39.921 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:23:39.944 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-08T23:23:39.951 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-08T23:23:39.997 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:40.108 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package jq.
2026-03-08T23:23:40.115 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-08T23:23:40.133 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:40.133 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-08T23:23:40.138 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-08T23:23:40.139 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T23:23:40.146 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package socat.
2026-03-08T23:23:40.152 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-08T23:23:40.152 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:23:40.165 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-osd.
2026-03-08T23:23:40.170 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:40.172 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.179 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package xmlstarlet.
2026-03-08T23:23:40.185 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-08T23:23:40.187 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:23:40.245 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-test.
2026-03-08T23:23:40.251 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:40.253 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:40 vm02 bash[17457]: cluster 2026-03-08T23:23:38.231588+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:40.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:40 vm02 bash[17457]: cluster 2026-03-08T23:23:38.231588+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:40.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:40 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:23:40.396 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-08T23:23:40.400 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-08T23:23:40.401 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:23:40.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:40 vm10 bash[20034]: cluster 2026-03-08T23:23:38.231588+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:40.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:40 vm10 bash[20034]: cluster 2026-03-08T23:23:38.231588+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:40.480 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-joblib.
2026-03-08T23:23:40.489 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-08T23:23:40.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:40 vm04 bash[19918]: cluster 2026-03-08T23:23:38.231588+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:40.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:40 vm04 bash[19918]: cluster 2026-03-08T23:23:38.231588+0000 mgr.x (mgr.14150) 331 : cluster [DBG] pgmap v277: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:40.689 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:23:40.719 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph.
2026-03-08T23:23:40.725 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:40.726 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.731 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-08T23:23:40.737 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-08T23:23:40.739 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:23:40.745 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-fuse.
2026-03-08T23:23:40.751 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:40.755 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.758 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn.
2026-03-08T23:23:40.764 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-08T23:23:40.765 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:23:40.795 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mds.
2026-03-08T23:23:40.800 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:40.801 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.928 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package cephadm.
2026-03-08T23:23:40.933 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:40.938 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.956 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-08T23:23:40.959 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-08T23:23:40.962 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:40.963 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:40.965 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T23:23:40.966 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:41.005 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-08T23:23:41.012 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:41.013 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:41.051 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-08T23:23:41.056 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-08T23:23:41.061 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-08T23:23:41.089 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-routes.
2026-03-08T23:23:41.096 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-08T23:23:41.097 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:23:41.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:40 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:23:41.525 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cachetools.
2026-03-08T23:23:41.525 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-08T23:23:41.526 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:41.527 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:41.528 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package ceph-volume.
2026-03-08T23:23:41.531 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-08T23:23:41.533 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-08T23:23:41.534 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:41.535 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:41.551 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rsa.
2026-03-08T23:23:41.557 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-08T23:23:41.558 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-08T23:23:41.568 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-08T23:23:41.574 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:41.574 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:41.582 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-google-auth.
2026-03-08T23:23:41.588 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-08T23:23:41.589 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-08T23:23:41.597 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-08T23:23:41.603 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-08T23:23:41.604 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-08T23:23:41.621 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-08T23:23:41.628 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-08T23:23:41.629 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:23:41.636 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-08T23:23:41.643 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-08T23:23:41.644 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-08T23:23:41.650 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-websocket.
2026-03-08T23:23:41.657 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-08T23:23:41.658 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-08T23:23:41.667 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package nvme-cli.
2026-03-08T23:23:41.672 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-08T23:23:41.673 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-08T23:23:41.681 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-08T23:23:41.687 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-08T23:23:41.706 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T23:23:41.722 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package pkg-config.
2026-03-08T23:23:41.729 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-08T23:23:41.732 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T23:23:41.759 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-08T23:23:41.766 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T23:23:41.772 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:41.870 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-08T23:23:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:41 vm04 bash[19918]: audit 2026-03-08T23:23:40.011116+0000 mgr.x (mgr.14150) 332 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:41.876 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-08T23:23:41.878 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-08T23:23:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:41 vm02 bash[17457]: audit 2026-03-08T23:23:40.011116+0000 mgr.x (mgr.14150) 332 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:41.895 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pastescript.
2026-03-08T23:23:41.900 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-08T23:23:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:41 vm10 bash[20034]: audit 2026-03-08T23:23:40.011116+0000 mgr.x (mgr.14150) 332 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:41.959 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-08T23:23:42.053 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-08T23:23:42.057 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pluggy.
2026-03-08T23:23:42.059 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:42.060 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:42.063 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-08T23:23:42.064 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-08T23:23:42.075 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-08T23:23:42.079 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-08T23:23:42.081 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-08T23:23:42.081 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-psutil.
2026-03-08T23:23:42.082 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:23:42.085 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-08T23:23:42.086 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:23:42.087 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-08T23:23:42.088 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-08T23:23:42.117 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-08T23:23:42.118 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-py.
2026-03-08T23:23:42.123 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-08T23:23:42.124 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-08T23:23:42.138 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-08T23:23:42.139 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:42.154 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-joblib.
2026-03-08T23:23:42.154 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package jq.
2026-03-08T23:23:42.160 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-08T23:23:42.161 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-08T23:23:42.161 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:42.162 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:23:42.169 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pygments.
2026-03-08T23:23:42.172 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-08T23:23:42.173 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T23:23:42.176 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package socat.
2026-03-08T23:23:42.182 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-08T23:23:42.185 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:23:42.205 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-08T23:23:42.211 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-08T23:23:42.212 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:23:42.221 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package xmlstarlet.
2026-03-08T23:23:42.227 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-08T23:23:42.228 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:23:42.232 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-sklearn.
2026-03-08T23:23:42.238 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-08T23:23:42.239 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:23:42.245 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-08T23:23:42.251 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-08T23:23:42.252 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:23:42.271 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-toml.
2026-03-08T23:23:42.277 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-08T23:23:42.277 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-test.
2026-03-08T23:23:42.278 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-08T23:23:42.284 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:42.285 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:42.309 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-pytest.
2026-03-08T23:23:42.315 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-08T23:23:42.320 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T23:23:42.373 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-simplejson.
2026-03-08T23:23:42.379 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-08T23:23:42.380 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:23:42.399 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-08T23:23:42.400 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-08T23:23:42.405 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:42.407 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-08T23:23:42.408 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:42.409 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:23:43.017 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package radosgw.
2026-03-08T23:23:43.022 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:43.074 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:43.130 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-cachetools.
2026-03-08T23:23:43.138 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-08T23:23:43.139 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-08T23:23:43.149 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-volume.
2026-03-08T23:23:43.155 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:43.157 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:43.159 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-rsa.
2026-03-08T23:23:43.164 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-08T23:23:43.166 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-08T23:23:43.184 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-google-auth.
2026-03-08T23:23:43.188 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-08T23:23:43.189 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-08T23:23:43.190 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-08T23:23:43.194 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:43.195 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:43.210 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-08T23:23:43.211 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-08T23:23:43.215 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-08T23:23:43.216 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:23:43.216 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-08T23:23:43.218 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-08T23:23:43.233 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-websocket.
2026-03-08T23:23:43.237 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-08T23:23:43.239 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-08T23:23:43.246 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-08T23:23:43.252 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-08T23:23:43.253 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-08T23:23:43.330 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-08T23:23:43.337 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-08T23:23:43.345 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package rbd-fuse.
2026-03-08T23:23:43.345 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package nvme-cli.
2026-03-08T23:23:43.350 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:43.351 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T23:23:43.351 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:43.352 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-08T23:23:43.353 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-08T23:23:43.369 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package smartmontools.
2026-03-08T23:23:43.376 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-08T23:23:43.387 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:23:43.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:43 vm02 bash[17457]: cluster 2026-03-08T23:23:40.235181+0000 mgr.x (mgr.14150) 333 : cluster [DBG] pgmap v278: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:43.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:43 vm02 bash[17457]: audit 2026-03-08T23:23:40.954561+0000 mgr.x (mgr.14150) 334 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:43.406 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package pkg-config.
2026-03-08T23:23:43.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:43 vm10 bash[20034]: cluster 2026-03-08T23:23:40.235181+0000 mgr.x (mgr.14150) 333 : cluster [DBG] pgmap v278: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:43.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:43 vm10 bash[20034]: audit 2026-03-08T23:23:40.954561+0000 mgr.x (mgr.14150) 334 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:43.412 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-08T23:23:43.413 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T23:23:43.433 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-08T23:23:43.436 INFO:teuthology.orchestra.run.vm10.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:23:43.439 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T23:23:43.446 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:43.518 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-08T23:23:43.524 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-08T23:23:43.524 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-08T23:23:43.542 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastescript.
2026-03-08T23:23:43.542 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-08T23:23:43.547 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:43.548 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:43.548 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-08T23:23:43.549 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-08T23:23:43.566 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-08T23:23:43.571 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pluggy.
2026-03-08T23:23:43.571 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-08T23:23:43.572 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:23:43.577 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-08T23:23:43.578 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-08T23:23:43.589 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-08T23:23:43.596 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-08T23:23:43.596 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:43.596 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-psutil.
2026-03-08T23:23:43.604 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-08T23:23:43.605 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-08T23:23:43.612 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package jq.
2026-03-08T23:23:43.618 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-08T23:23:43.618 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:43.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:43 vm04 bash[19918]: cluster 2026-03-08T23:23:40.235181+0000 mgr.x (mgr.14150) 333 : cluster [DBG] pgmap v278: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:43.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:43 vm04 bash[19918]: audit 2026-03-08T23:23:40.954561+0000 mgr.x (mgr.14150) 334 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:43.628 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-py.
2026-03-08T23:23:43.633 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package socat.
2026-03-08T23:23:43.634 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-08T23:23:43.635 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-08T23:23:43.638 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-08T23:23:43.639 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:23:43.662 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package xmlstarlet.
2026-03-08T23:23:43.664 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pygments.
2026-03-08T23:23:43.668 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-08T23:23:43.668 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:23:43.671 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-08T23:23:43.672 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T23:23:43.700 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-08T23:23:43.700 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-08T23:23:43.715 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-test.
2026-03-08T23:23:43.721 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:43.723 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:43.748 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-08T23:23:43.754 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-08T23:23:43.754 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:23:43.772 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-toml.
2026-03-08T23:23:43.776 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-08T23:23:43.777 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-08T23:23:43.797 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pytest.
2026-03-08T23:23:43.802 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-08T23:23:43.803 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T23:23:43.835 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplejson.
2026-03-08T23:23:43.840 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:43.840 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:43.840 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:43.840 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:43.840 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:43.842 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-08T23:23:43.843 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:23:43.873 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-08T23:23:43.875 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-08T23:23:43.876 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:23:44.044 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package radosgw.
2026-03-08T23:23:44.044 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:44.045 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:44.114 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.114 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.114 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.114 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.114 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.114 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.115 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.115 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.115 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:43 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.115 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.117 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-08T23:23:44.446 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-08T23:23:44.449 INFO:teuthology.orchestra.run.vm10.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-08T23:23:44.460 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package rbd-fuse.
2026-03-08T23:23:44.466 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:44.467 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:44.511 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-08T23:23:44.656 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package smartmontools.
2026-03-08T23:23:44.661 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-08T23:23:44.665 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package ceph-volume.
2026-03-08T23:23:44.668 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T23:23:44.668 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:44.670 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:23:44.699 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-08T23:23:44.705 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:44.706 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:44.718 INFO:teuthology.orchestra.run.vm04.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:23:44.722 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-08T23:23:44.727 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-08T23:23:44.728 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-08T23:23:44.737 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-08T23:23:44.816 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-08T23:23:44.822 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-08T23:23:44.823 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-08T23:23:44.824 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.825 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:44 vm10 bash[20034]: cluster 2026-03-08T23:23:42.235613+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:44.825 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.825 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.825 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.825 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.844 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package nvme-cli.
2026-03-08T23:23:44.850 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-08T23:23:44.851 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-08T23:23:44.893 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package pkg-config.
2026-03-08T23:23:44.899 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-08T23:23:44.900 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T23:23:44.915 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-08T23:23:44.920 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T23:23:44.921 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:44.968 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-08T23:23:44.973 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-08T23:23:44.974 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-08T23:23:44.980 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-08T23:23:44.980 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:44 vm04 bash[19918]: cluster 2026-03-08T23:23:42.235613+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:44.980 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:44 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.980 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:44 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.980 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:44 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.980 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:44 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:44.981 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-08T23:23:44.988 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pastescript.
2026-03-08T23:23:44.994 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-08T23:23:44.995 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-08T23:23:45.016 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pluggy.
2026-03-08T23:23:45.022 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-08T23:23:45.023 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-08T23:23:45.039 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-psutil.
2026-03-08T23:23:45.044 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-08T23:23:45.045 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-08T23:23:45.068 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-py.
2026-03-08T23:23:45.074 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-08T23:23:45.075 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-08T23:23:45.097 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pygments.
2026-03-08T23:23:45.103 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-08T23:23:45.104 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T23:23:45.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:44 vm02 bash[17457]: cluster 2026-03-08T23:23:42.235613+0000 mgr.x (mgr.14150) 335 : cluster [DBG] pgmap v279: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:45.149 INFO:teuthology.orchestra.run.vm10.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:45 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:45 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:45 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:45 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:44 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.151 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:45 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.156 INFO:teuthology.orchestra.run.vm10.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-08T23:23:45.157 INFO:teuthology.orchestra.run.vm10.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:45.173 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-08T23:23:45.180 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-08T23:23:45.181 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:23:45.198 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-toml.
2026-03-08T23:23:45.200 INFO:teuthology.orchestra.run.vm10.stdout:Adding system user cephadm....done
2026-03-08T23:23:45.203 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-08T23:23:45.204 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-08T23:23:45.209 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:23:45.220 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-pytest.
2026-03-08T23:23:45.227 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-08T23:23:45.228 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T23:23:45.257 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-simplejson.
2026-03-08T23:23:45.263 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-08T23:23:45.264 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-08T23:23:45.280 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:45.280 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:45.280 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:45.280 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:45.287 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-08T23:23:45.290 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-08T23:23:45.293 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-08T23:23:45.294 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-08T23:23:45.356 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T23:23:45.359 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-08T23:23:45.365 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-08T23:23:45.426 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-08T23:23:45.431 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T23:23:45.434 INFO:teuthology.orchestra.run.vm04.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T23:23:45.435 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package radosgw. 2026-03-08T23:23:45.441 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-08T23:23:45.442 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:45.495 INFO:teuthology.orchestra.run.vm10.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-08T23:23:45.497 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-08T23:23:45.497 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-08T23:23:45.592 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:23:45.593 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.593 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.593 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.593 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.719 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-08T23:23:45.721 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-08T23:23:45.723 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package rbd-fuse.
2026-03-08T23:23:45.728 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T23:23:45.729 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:45.745 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package smartmontools.
2026-03-08T23:23:45.750 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-08T23:23:45.759 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:23:45.791 INFO:teuthology.orchestra.run.vm10.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:23:45.799 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:23:45.807 INFO:teuthology.orchestra.run.vm02.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:23:45.848 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.848 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.848 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.848 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:45.848 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:45 vm04 bash[19918]: cluster 2026-03-08T23:23:44.236205+0000 mgr.x (mgr.14150) 336 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:45.848 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:45 vm04 bash[19918]: cluster 2026-03-08T23:23:44.236205+0000 mgr.x (mgr.14150) 336 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:45.870 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:23:45.938 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:46.012 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:23:46.014 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-08T23:23:46.017 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-08T23:23:46.019 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-08T23:23:46.021 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T23:23:46.024 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-08T23:23:46.028 INFO:teuthology.orchestra.run.vm10.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-08T23:23:46.030 INFO:teuthology.orchestra.run.vm10.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-08T23:23:46.032 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T23:23:46.035 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-08T23:23:46.074 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-08T23:23:46.074 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:45 vm02 bash[17457]: cluster 2026-03-08T23:23:44.236205+0000 mgr.x (mgr.14150) 336 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:46.074 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:45 vm02 bash[17457]: cluster 2026-03-08T23:23:44.236205+0000 mgr.x (mgr.14150) 336 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:46.074 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:45 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.074 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:45 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.074 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:45 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.075 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:45 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.075 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:45 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.075 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:46 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:46 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:46 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:45 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:46 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.136 INFO:teuthology.orchestra.run.vm04.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-08T23:23:46.144 INFO:teuthology.orchestra.run.vm04.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-08T23:23:46.146 INFO:teuthology.orchestra.run.vm04.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:46.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:45 vm10 bash[20034]: cluster 2026-03-08T23:23:44.236205+0000 mgr.x (mgr.14150) 336 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:46.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:45 vm10 bash[20034]: cluster 2026-03-08T23:23:44.236205+0000 mgr.x (mgr.14150) 336 : cluster [DBG] pgmap v280: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:46.163 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-08T23:23:46.190 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user cephadm....done
2026-03-08T23:23:46.198 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:23:46.238 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:23:46.275 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-08T23:23:46.310 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:23:46.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.332 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.332 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.332 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.332 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.339 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:46.341 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-08T23:23:46.392 INFO:teuthology.orchestra.run.vm10.stdout:Setting up zip (3.0-12build2) ...
2026-03-08T23:23:46.394 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T23:23:46.406 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-08T23:23:46.416 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-08T23:23:46.481 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-08T23:23:46.483 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-08T23:23:46.484 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-08T23:23:46.485 INFO:teuthology.orchestra.run.vm02.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-08T23:23:46.550 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-08T23:23:46.578 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:23:46.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.644 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.644 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.644 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.644 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.678 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:23:46.698 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-08T23:23:46.751 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T23:23:46.753 INFO:teuthology.orchestra.run.vm10.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:23:46.756 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:23:46.764 INFO:teuthology.orchestra.run.vm04.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:23:46.772 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:23:46.827 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-08T23:23:46.841 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:23:46.861 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:46.912 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:46.939 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.939 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.939 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.939 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.939 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:46.984 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:23:46.986 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-08T23:23:46.989 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-08T23:23:46.991 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-08T23:23:46.993 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T23:23:46.996 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-08T23:23:46.999 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:23:47.000 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-08T23:23:47.002 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-08T23:23:47.003 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T23:23:47.006 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-08T23:23:47.121 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T23:23:47.124 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-08T23:23:47.193 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:23:47.206 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:23:47.251 INFO:teuthology.orchestra.run.vm02.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:47 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:47 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:47 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:47 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:46 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.253 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:47 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:47.257 INFO:teuthology.orchestra.run.vm02.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-08T23:23:47.259 INFO:teuthology.orchestra.run.vm02.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:47.265 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:23:47.301 INFO:teuthology.orchestra.run.vm02.stdout:Adding system user cephadm....done
2026-03-08T23:23:47.309 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:23:47.327 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-08T23:23:47.345 INFO:teuthology.orchestra.run.vm04.stdout:Setting up zip (3.0-12build2) ...
2026-03-08T23:23:47.348 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T23:23:47.380 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-08T23:23:47.395 INFO:teuthology.orchestra.run.vm10.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:23:47.397 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:47.443 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:47.445 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-08T23:23:47.484 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:23:47.508 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-08T23:23:47.576 INFO:teuthology.orchestra.run.vm02.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-08T23:23:47.579 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-08T23:23:47.624 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:23:47.670 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:23:47.694 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T23:23:47.696 INFO:teuthology.orchestra.run.vm04.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:23:47.699 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:23:47.809 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-08T23:23:47.812 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:47.887 INFO:teuthology.orchestra.run.vm02.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:23:47.895 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:23:47.959 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:23:47.967 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:23:48.035 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:48.053 INFO:teuthology.orchestra.run.vm10.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T23:23:48.075 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:48.080 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-08T23:23:48.088 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T23:23:48.112 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:23:48.114 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-08T23:23:48.117 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-08T23:23:48.123 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-08T23:23:48.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:47 vm04 bash[19918]: cluster 2026-03-08T23:23:46.236626+0000 mgr.x (mgr.14150) 337 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:48.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:47 vm04 bash[19918]: cluster 2026-03-08T23:23:46.236626+0000 mgr.x (mgr.14150) 337 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:48.125 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T23:23:48.127 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-08T23:23:48.132 INFO:teuthology.orchestra.run.vm02.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-08T23:23:48.134 INFO:teuthology.orchestra.run.vm02.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-08T23:23:48.136 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T23:23:48.138 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-psutil (5.9.0-1build1) ...
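The update-alternatives lines above record Debian's alternatives system pointing /usr/bin/lua and /usr/bin/luac at the 5.1 binaries in auto mode, i.e. the highest-priority installed candidate wins. If it mattered which interpreter a later workload picked up, the wiring could be inspected or pinned along these lines (a sketch; lua-interpreter is the alternative group name shown in the log itself):

    update-alternatives --display lua-interpreter              # list candidates and the current choice
    update-alternatives --set lua-interpreter /usr/bin/lua5.1  # pin it, switching the group to manual mode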
2026-03-08T23:23:48.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:47 vm02 bash[17457]: cluster 2026-03-08T23:23:46.236626+0000 mgr.x (mgr.14150) 337 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:48.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:47 vm02 bash[17457]: cluster 2026-03-08T23:23:46.236626+0000 mgr.x (mgr.14150) 337 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:48.151 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T23:23:48.153 INFO:teuthology.orchestra.run.vm10.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:23:48.156 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-08T23:23:48.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:47 vm10 bash[20034]: cluster 2026-03-08T23:23:46.236626+0000 mgr.x (mgr.14150) 337 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:48.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:47 vm10 bash[20034]: cluster 2026-03-08T23:23:46.236626+0000 mgr.x (mgr.14150) 337 : cluster [DBG] pgmap v281: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:48.176 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:23:48.238 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-08T23:23:48.272 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-08T23:23:48.295 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-08T23:23:48.306 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:48.308 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-08T23:23:48.350 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:23:48.363 INFO:teuthology.orchestra.run.vm04.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:23:48.365 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:48.380 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:23:48.430 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:23:48.455 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-08T23:23:48.468 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:23:48.520 INFO:teuthology.orchestra.run.vm02.stdout:Setting up zip (3.0-12build2) ...
2026-03-08T23:23:48.522 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T23:23:48.526 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-08T23:23:48.595 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:23:48.660 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-prettytable (2.5.0-2) ...
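The pgmap lines relayed through the mon units are the mgr's periodic cluster digest: "4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s" reads as placement-group states, logical data stored, raw space consumed versus total, and current client I/O rate. The same one-line summary can be requested on demand instead of scraped from the journal (assuming a host with a working client keyring):

    ceph pg stat    # prints the current pgmap summary, e.g. "4 pgs: 4 active+clean; ..."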
2026-03-08T23:23:48.740 INFO:teuthology.orchestra.run.vm10.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T23:23:48.742 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-08T23:23:48.806 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:23:48.823 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:23:48.828 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:23:48.879 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T23:23:48.881 INFO:teuthology.orchestra.run.vm02.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:23:48.884 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:23:48.899 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:23:48.981 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T23:23:48.989 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:23:49.044 INFO:teuthology.orchestra.run.vm04.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T23:23:49.066 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:49.071 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-08T23:23:49.082 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-08T23:23:49.122 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:23:49.144 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T23:23:49.146 INFO:teuthology.orchestra.run.vm04.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:23:49.148 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-08T23:23:49.149 INFO:teuthology.orchestra.run.vm10.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T23:23:49.152 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-08T23:23:49.154 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:49.157 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T23:23:49.218 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-08T23:23:49.257 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T23:23:49.283 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:49.287 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-08T23:23:49.294 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-08T23:23:49.353 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:23:49.365 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:23:49.372 INFO:teuthology.orchestra.run.vm10.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-08T23:23:49.375 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-08T23:23:49.443 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-08T23:23:49.444 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:49.446 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-08T23:23:49.480 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-08T23:23:49.521 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-08T23:23:49.533 INFO:teuthology.orchestra.run.vm10.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:49.536 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-08T23:23:49.551 INFO:teuthology.orchestra.run.vm02.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:23:49.554 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:49.593 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:23:49.614 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-08T23:23:49.643 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:23:49.657 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-08T23:23:49.736 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T23:23:49.738 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-08T23:23:49.758 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-08T23:23:49.818 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:23:49.820 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:23:49.847 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T23:23:49.894 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:23:49.978 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T23:23:49.981 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:49.983 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:49.983 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:23:49.986 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T23:23:50.019 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:49 vm02 bash[17457]: cluster 2026-03-08T23:23:48.236894+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:50.019 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:49 vm02 bash[17457]: cluster 2026-03-08T23:23:48.236894+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:50.019 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:50 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:23:50.091 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-08T23:23:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:49 vm04 bash[19918]: cluster 2026-03-08T23:23:48.236894+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:49 vm04 bash[19918]: cluster 2026-03-08T23:23:48.236894+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:50.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:49 vm10 bash[20034]: cluster 2026-03-08T23:23:48.236894+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:50.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:49 vm10 bash[20034]: cluster 2026-03-08T23:23:48.236894+0000 mgr.x (mgr.14150) 338 : cluster [DBG] pgmap v282: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:50.159 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T23:23:50.161 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-08T23:23:50.164 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:50.166 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T23:23:50.260 INFO:teuthology.orchestra.run.vm02.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T23:23:50.285 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:50.290 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-08T23:23:50.303 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-08T23:23:50.366 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T23:23:50.368 INFO:teuthology.orchestra.run.vm02.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:23:50.369 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-08T23:23:50.371 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-08T23:23:50.371 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-08T23:23:50.435 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:50.438 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-08T23:23:50.440 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-08T23:23:50.508 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T23:23:50.510 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-08T23:23:50.519 INFO:teuthology.orchestra.run.vm04.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-08T23:23:50.521 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-08T23:23:50.586 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:23:50.600 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-08T23:23:50.636 INFO:teuthology.orchestra.run.vm10.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-08T23:23:50.644 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.646 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.648 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.651 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.653 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.656 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-08T23:23:50.714 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-08T23:23:50.714 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-08T23:23:50.723 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-08T23:23:50.742 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-08T23:23:50.793 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:23:50.827 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T23:23:50.861 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-08T23:23:50.932 INFO:teuthology.orchestra.run.vm02.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T23:23:50.934 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-08T23:23:50.937 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T23:23:50.940 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.942 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:50.944 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T23:23:50.965 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:50 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:50.965 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:50 vm10 bash[20034]: audit 2026-03-08T23:23:50.019100+0000 mgr.x (mgr.14150) 339 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:50.965 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:50 vm10 bash[20034]: audit 2026-03-08T23:23:50.019100+0000 mgr.x (mgr.14150) 339 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:23:50.965 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:50 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:50.965 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:50 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:50.965 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:50 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:50.965 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:50 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:51.016 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:23:51.019 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:23:51.093 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:23:51.093 INFO:teuthology.orchestra.run.vm10.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:51.095 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:51.097 INFO:teuthology.orchestra.run.vm10.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:51.099 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:51.101 INFO:teuthology.orchestra.run.vm10.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.103 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.105 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.107 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:50 vm04 bash[19918]: audit 2026-03-08T23:23:50.019100+0000 mgr.x (mgr.14150) 339 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:50 vm04 bash[19918]: audit 2026-03-08T23:23:50.019100+0000 mgr.x (mgr.14150) 339 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.139 INFO:teuthology.orchestra.run.vm10.stdout:Adding group ceph....done 2026-03-08T23:23:51.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:50 vm02 bash[17457]: audit 2026-03-08T23:23:50.019100+0000 mgr.x (mgr.14150) 339 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:50 vm02 bash[17457]: audit 2026-03-08T23:23:50.019100+0000 mgr.x (mgr.14150) 339 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.175 INFO:teuthology.orchestra.run.vm10.stdout:Adding system user ceph....done 2026-03-08T23:23:51.182 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-08T23:23:51.184 INFO:teuthology.orchestra.run.vm10.stdout:Setting system user ceph properties....done 2026-03-08T23:23:51.188 INFO:teuthology.orchestra.run.vm10.stdout:Fixing /var/run/ceph ownership....done 2026-03-08T23:23:51.192 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.192 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.192 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
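The "Adding group ceph....done" through "Fixing /var/run/ceph ownership....done" lines on vm10 are the ceph-common postinst provisioning the service account the daemons run as. Roughly, and only as a sketch (the real maintainer script carries more flags and guards than shown here):

    # Approximation of the ceph-common postinst steps logged above.
    addgroup --system ceph
    adduser --system --home /var/lib/ceph --no-create-home \
            --disabled-password --ingroup ceph ceph
    chown ceph:ceph /var/run/ceph   # the "Fixing /var/run/ceph ownership" step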
2026-03-08T23:23:51.192 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.192 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:50 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:23:51.192 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.275 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-08T23:23:51.342 INFO:teuthology.orchestra.run.vm02.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T23:23:51.345 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-08T23:23:51.347 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T23:23:51.349 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-08T23:23:51.492 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-08T23:23:51.527 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.527 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.527 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.527 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.527 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.527 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-08T23:23:51.558 INFO:teuthology.orchestra.run.vm04.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:23:51.563 INFO:teuthology.orchestra.run.vm02.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-08T23:23:51.565 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-08T23:23:51.565 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.568 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.570 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.572 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.575 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.628 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:23:51.631 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-08T23:23:51.641 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-08T23:23:51.641 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-08T23:23:51.708 INFO:teuthology.orchestra.run.vm02.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-08T23:23:51.710 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-08T23:23:51.792 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-08T23:23:51.856 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
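The "Created symlink ... → /lib/systemd/system/..." lines are the packages enabling their units and targets, which is the same bookkeeping systemctl enable records from a unit's [Install] section. A sketch using unit names taken from this log:

    systemctl enable rbdmap.service           # creates the multi-user.target.wants symlink seen above
    systemctl list-dependencies ceph.target   # ceph-fuse.target etc. now hang off ceph.target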
2026-03-08T23:23:51.856 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 bash[20034]: cluster 2026-03-08T23:23:50.237720+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:51.856 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 bash[20034]: cluster 2026-03-08T23:23:50.237720+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:51.856 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 bash[20034]: audit 2026-03-08T23:23:50.965322+0000 mgr.x (mgr.14150) 341 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.856 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 bash[20034]: audit 2026-03-08T23:23:50.965322+0000 mgr.x (mgr.14150) 341 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.856 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.856 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.856 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.857 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.929 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-08T23:23:51.947 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:51.949 INFO:teuthology.orchestra.run.vm10.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 
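The paired mgr audit entries from client.iscsi.iscsi.a and client.iscsi.iscsi.b (cmd=[{"prefix": "service status", "format": "json"}]) are the two iSCSI gateways polling the mgr service map, the same plumbing behind gateway b's earlier "there is no tcmu-runner data available" debug line. An operator can issue the same query by hand; jq, installed on these nodes above, pretty-prints the JSON (sketch):

    # Same command the audit log records as prefix "service status".
    ceph service status --format json | jq .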
2026-03-08T23:23:51.960 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:51 vm04 bash[19918]: cluster 2026-03-08T23:23:50.237720+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:51.960 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:51 vm04 bash[19918]: cluster 2026-03-08T23:23:50.237720+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:51.960 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:51 vm04 bash[19918]: audit 2026-03-08T23:23:50.965322+0000 mgr.x (mgr.14150) 341 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.961 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:51 vm04 bash[19918]: audit 2026-03-08T23:23:50.965322+0000 mgr.x (mgr.14150) 341 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:51.961 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.961 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.961 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:51.961 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.017 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-08T23:23:52.044 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.047 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.049 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
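The cluster [DBG] pgmap lines are the mgr's periodic digest: 4 PGs all active+clean, 449 KiB of data, 216 MiB raw used of 160 GiB. The same summary is available on demand (sketch):

    ceph pg stat   # one-line pgmap summary matching the cluster log entries above
    ceph df        # capacity detail behind the used/avail figures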
2026-03-08T23:23:52.051 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.054 INFO:teuthology.orchestra.run.vm04.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.056 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.059 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.061 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.096 INFO:teuthology.orchestra.run.vm04.stdout:Adding group ceph....done 2026-03-08T23:23:52.131 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user ceph....done 2026-03-08T23:23:52.136 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T23:23:52.138 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.139 INFO:teuthology.orchestra.run.vm04.stdout:Setting system user ceph properties....done 2026-03-08T23:23:52.140 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.142 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T23:23:52.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:51 vm02 bash[17457]: cluster 2026-03-08T23:23:50.237720+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:52.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:51 vm02 bash[17457]: cluster 2026-03-08T23:23:50.237720+0000 mgr.x (mgr.14150) 340 : cluster [DBG] pgmap v283: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:23:52.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:51 vm02 bash[17457]: audit 2026-03-08T23:23:50.965322+0000 mgr.x (mgr.14150) 341 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:52.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:51 vm02 bash[17457]: audit 2026-03-08T23:23:50.965322+0000 mgr.x (mgr.14150) 341 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:23:52.144 INFO:teuthology.orchestra.run.vm04.stdout:Fixing /var/run/ceph ownership....done 2026-03-08T23:23:52.148 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.148 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.148 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.148 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:51 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:51 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.158 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.208 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-08T23:23:52.208 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-08T23:23:52.463 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-08T23:23:52.559 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.560 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.560 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.560 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.560 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.585 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.586 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.586 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.586 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.639 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-08T23:23:52.723 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-08T23:23:52.796 INFO:teuthology.orchestra.run.vm02.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:23:52.802 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.804 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.806 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.808 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.810 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.817 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.817 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.817 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.817 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.817 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.866 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:52.868 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 
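Every node is converging on build 19.2.3-678-ge911bdeb-1jammy, i.e. v19.2.3 plus 678 commits at short sha1 e911bdeb, packaged for jammy. Once dpkg finishes, the result can be spot-checked per node (sketch):

    dpkg-query -W -f='${Package} ${Version}\n' ceph-common librbd1
    ceph --version   # should report 19.2.3-678-ge911bdeb once ceph-common is configured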
2026-03-08T23:23:52.871 INFO:teuthology.orchestra.run.vm04.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:52.874 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-08T23:23:52.874 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.102 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-08T23:23:53.103 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-08T23:23:53.111 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.124 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:23:53.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.144 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.144 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.177 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-08T23:23:53.178 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-08T23:23:53.293 INFO:teuthology.orchestra.run.vm02.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.295 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.299 INFO:teuthology.orchestra.run.vm02.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-08T23:23:53.301 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.303 INFO:teuthology.orchestra.run.vm02.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.305 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.307 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.309 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.342 INFO:teuthology.orchestra.run.vm02.stdout:Adding group ceph....done 2026-03-08T23:23:53.378 INFO:teuthology.orchestra.run.vm02.stdout:Adding system user ceph....done 2026-03-08T23:23:53.387 INFO:teuthology.orchestra.run.vm02.stdout:Setting system user ceph properties....done 2026-03-08T23:23:53.392 INFO:teuthology.orchestra.run.vm02.stdout:Fixing /var/run/ceph ownership....done 2026-03-08T23:23:53.395 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.396 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.396 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.396 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.396 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:23:53.549 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.549 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:23:53.552 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.638 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-08T23:23:53.642 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:23:53.701 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-08T23:23:53.701 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-08T23:23:53.702 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:23:53.702 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.702 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.702 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.702 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.702 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T23:23:53.741 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.741 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:53 vm02 bash[17457]: cluster 2026-03-08T23:23:52.238011+0000 mgr.x (mgr.14150) 342 : cluster [DBG] pgmap v284: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:53.741 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:53 vm02 bash[17457]: audit 2026-03-08T23:23:53.322433+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.110:0/471835241' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:53.741 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.741 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.742 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.742 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.841 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:53 vm04 bash[19918]: cluster 2026-03-08T23:23:52.238011+0000 mgr.x (mgr.14150) 342 : cluster [DBG] pgmap v284: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:53.841 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:53 vm04 bash[19918]: audit 2026-03-08T23:23:53.322433+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.110:0/471835241' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:53.841 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.841 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.841 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:53.841 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.016 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:53 vm10 bash[20034]: cluster 2026-03-08T23:23:52.238011+0000 mgr.x (mgr.14150) 342 : cluster [DBG] pgmap v284: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:54.016 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:53 vm10 bash[20034]: audit 2026-03-08T23:23:53.322433+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.110:0/471835241' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:54.016 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.016 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.016 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.016 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.016 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.051 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.051 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.051 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.051 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.051 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:53 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.062 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.105 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.122 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T23:23:54.122 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T23:23:54.127 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.127 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.128 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.128 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:53 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.134 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.139 INFO:teuthology.orchestra.run.vm02.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.183 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T23:23:54.183 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T23:23:54.301 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.301 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.301 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.301 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.301 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.309 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.309 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.309 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.309 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.309 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.413 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-08T23:23:54.413 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-08T23:23:54.429 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.429 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.429 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.429 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.518 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.586 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.586 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.586 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.586 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.589 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T23:23:54.589 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T23:23:54.605 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.605 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.605 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.605 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.605 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.617 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.619 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.641 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.705 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T23:23:54.705 INFO:teuthology.orchestra.run.vm10.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T23:23:54.728 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.728 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.728 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.729 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.788 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:54.879 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.879 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:54 vm02 bash[17457]: audit 2026-03-08T23:23:54.291273+0000 mon.a (mon.0) 743 : audit [DBG] from='client.? 192.168.123.104:0/1312618794' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:54.879 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.879 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.879 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.879 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-08T23:23:54.879 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:54 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.880 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.880 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:54 vm10 bash[20034]: audit 2026-03-08T23:23:54.291273+0000 mon.a (mon.0) 743 : audit [DBG] from='client.? 192.168.123.104:0/1312618794' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:54.880 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.881 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.881 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.881 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:54 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.985 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.985 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.985 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:54 vm04 bash[19918]: audit 2026-03-08T23:23:54.291273+0000 mon.a (mon.0) 743 : audit [DBG] from='client.? 192.168.123.104:0/1312618794' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:54.985 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.985 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:54 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:54.987 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.071 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T23:23:55.071 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T23:23:55.107 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.119 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.121 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.132 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.133 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.133 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.133 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.133 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.134 INFO:teuthology.orchestra.run.vm10.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.144 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.144 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.249 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T23:23:55.252 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.252 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.253 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.253 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.258 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:23:55.272 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:23:55.342 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.350 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-08T23:23:55.405 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T23:23:55.405 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T23:23:55.464 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.466 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.480 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.541 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T23:23:55.542 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
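[Editor's note] The "Created symlink" pairs above are the packages' postinst scripts running systemctl enable: each ceph-*.target lists, in its [Install] section, the targets that should pull it in, and enable materializes one symlink per listed target under /etc/systemd/system/<target>.wants/. A minimal sketch of the section that would produce exactly the two symlinks seen for ceph-mon.target (assumed from the stock Ceph packaging, not copied from this run):

    # /lib/systemd/system/ceph-mon.target -- [Install] section only (sketch)
    [Install]
    WantedBy=multi-user.target ceph.target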
2026-03-08T23:23:55.542 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.542 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.542 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.542 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.556 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.556 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.556 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.556 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.556 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.841 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 bash[17457]: cluster 2026-03-08T23:23:54.238257+0000 mgr.x (mgr.14150) 343 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 bash[17457]: cluster 2026-03-08T23:23:54.238257+0000 mgr.x (mgr.14150) 343 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 bash[17457]: audit 2026-03-08T23:23:55.557709+0000 mon.a (mon.0) 744 : audit [DBG] from='client.? 192.168.123.102:0/44840139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 bash[17457]: audit 2026-03-08T23:23:55.557709+0000 mon.a (mon.0) 744 : audit [DBG] from='client.? 192.168.123.102:0/44840139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.842 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.844 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
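The KillMode=none warning above floods every journalctl stream because each daemon on a host runs from the same cephadm-generated unit template, ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service, whose line 23 sets KillMode=none; cephadm appears to do this deliberately so that systemd leaves container teardown to the unit's own stop logic rather than killing the cgroup. For a throwaway test node where one only wants the journal noise gone, a drop-in override along the following lines would work; this is a sketch under that assumption, not part of the teuthology run, and changing KillMode on a real cephadm cluster would alter how its containers are stopped.

    # Sketch: silence the recurring warning by overriding KillMode for the
    # cephadm unit template via a systemd drop-in. KillMode=mixed (SIGTERM to
    # the main process, SIGKILL to the rest of the cgroup) is one of the
    # alternatives the warning itself suggests. Must run as root.
    from pathlib import Path
    import subprocess

    FSID = "91105a84-1b44-11f1-9a43-e95894f13987"  # fsid taken from the log above
    dropin_dir = Path(f"/etc/systemd/system/ceph-{FSID}@.service.d")
    dropin_dir.mkdir(parents=True, exist_ok=True)
    (dropin_dir / "10-killmode.conf").write_text("[Service]\nKillMode=mixed\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)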
2026-03-08T23:23:55.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 bash[19918]: cluster 2026-03-08T23:23:54.238257+0000 mgr.x (mgr.14150) 343 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:55.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 bash[19918]: cluster 2026-03-08T23:23:54.238257+0000 mgr.x (mgr.14150) 343 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:55.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 bash[19918]: audit 2026-03-08T23:23:55.557709+0000 mon.a (mon.0) 744 : audit [DBG] from='client.? 192.168.123.102:0/44840139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:55.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 bash[19918]: audit 2026-03-08T23:23:55.557709+0000 mon.a (mon.0) 744 : audit [DBG] from='client.? 192.168.123.102:0/44840139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:55.866 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.867 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.867 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:55.896 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:55.896 INFO:teuthology.orchestra.run.vm10.stdout:Running kernel seems to be up-to-date.
2026-03-08T23:23:55.896 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:55.896 INFO:teuthology.orchestra.run.vm10.stdout:Services to be restarted:
2026-03-08T23:23:55.899 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart packagekit.service
2026-03-08T23:23:55.901 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:Service restarts being deferred:
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart unattended-upgrades.service
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:No containers need to be restarted.
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:No user sessions are running outdated binaries.
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:55.902 INFO:teuthology.orchestra.run.vm10.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T23:23:55.903 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T23:23:55.903 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T23:23:55.956 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.970 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.972 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:55.986 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:56.106 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.106 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.106 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.106 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.106 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.106 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T23:23:56.114 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:23:56.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.124 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.129 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:23:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:55 vm10 bash[20034]: cluster 2026-03-08T23:23:54.238257+0000 mgr.x (mgr.14150) 343 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:55 vm10 bash[20034]: cluster 2026-03-08T23:23:54.238257+0000 mgr.x (mgr.14150) 343 : cluster [DBG] pgmap v285: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:55 vm10 bash[20034]: audit 2026-03-08T23:23:55.557709+0000 mon.a (mon.0) 744 : audit [DBG] from='client.? 192.168.123.102:0/44840139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:55 vm10 bash[20034]: audit 2026-03-08T23:23:55.557709+0000 mon.a (mon.0) 744 : audit [DBG] from='client.? 192.168.123.102:0/44840139' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-08T23:23:56.206 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-08T23:23:56.330 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:56.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.394 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.394 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.395 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.405 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T23:23:56.405 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T23:23:56.694 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:56.694 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date.
2026-03-08T23:23:56.694 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:56.694 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted:
2026-03-08T23:23:56.696 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred:
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted.
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries.
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:56.699 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T23:23:56.771 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.771 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.771 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.771 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.771 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:56.778 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:23:56.781 DEBUG:teuthology.orchestra.run.vm10:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install open-iscsi multipath-tools python3-xmltodict python3-jmespath
2026-03-08T23:23:56.855 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:23:56.974 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:56.976 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:56.993 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:57.049 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:23:57.050 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:23:57.054 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T23:23:57.054 INFO:teuthology.orchestra.run.vm02.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T23:23:57.054 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.054 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.055 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.055 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.055 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.220 INFO:teuthology.orchestra.run.vm10.stdout:open-iscsi is already the newest version (2.1.5-1ubuntu1.1).
2026-03-08T23:23:57.220 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:23:57.222 INFO:teuthology.orchestra.run.vm10.stdout: libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:23:57.222 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:23:57.225 INFO:teuthology.orchestra.run.vm10.stdout:Suggested packages:
2026-03-08T23:23:57.225 INFO:teuthology.orchestra.run.vm10.stdout: multipath-tools-boot
2026-03-08T23:23:57.246 INFO:teuthology.orchestra.run.vm10.stdout:The following NEW packages will be installed:
2026-03-08T23:23:57.246 INFO:teuthology.orchestra.run.vm10.stdout: multipath-tools python3-jmespath python3-xmltodict
2026-03-08T23:23:57.335 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:23:57.335 INFO:teuthology.orchestra.run.vm10.stdout:Need to get 365 kB of archives.
2026-03-08T23:23:57.335 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 1399 kB of additional disk space will be used.
2026-03-08T23:23:57.335 INFO:teuthology.orchestra.run.vm10.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-08T23:23:57.352 INFO:teuthology.orchestra.run.vm10.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-08T23:23:57.354 INFO:teuthology.orchestra.run.vm10.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 multipath-tools amd64 0.8.8-1ubuntu1.22.04.4 [331 kB]
2026-03-08T23:23:57.364 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.364 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.364 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.364 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.364 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.460 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:57.473 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:57.476 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:57.488 INFO:teuthology.orchestra.run.vm02.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:23:57.586 INFO:teuthology.orchestra.run.vm10.stdout:Fetched 365 kB in 0s (2806 kB/s)
2026-03-08T23:23:57.590 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:23:57.593 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install open-iscsi multipath-tools python3-xmltodict python3-jmespath
2026-03-08T23:23:57.603 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-jmespath.
2026-03-08T23:23:57.609 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T23:23:57.616 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:23:57.630 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:23:57.634 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-08T23:23:57.636 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-08T23:23:57.637 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-08T23:23:57.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.644 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
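All three hosts trip the same apt warning here: teuthology's Debian install path still passes --force-yes, which apt deprecated in favour of finer-grained --allow-* options. A modernized equivalent of the command logged above might look like the following sketch; the package list is taken from the log, the flag mapping follows apt's documented split of --force-yes, and the rest is illustrative rather than teuthology's actual install task.

    # Sketch: the logged apt-get call with the deprecated --force-yes replaced
    # by the specific --allow-* options its warning points to. Building the
    # command as a list avoids shell quoting around the Dpkg options.
    import subprocess

    PACKAGES = ["open-iscsi", "multipath-tools", "python3-xmltodict", "python3-jmespath"]
    cmd = [
        "sudo", "DEBIAN_FRONTEND=noninteractive", "apt-get", "-y",
        "--allow-downgrades", "--allow-remove-essential",
        "--allow-change-held-packages",
        "-o", "Dpkg::Options::=--force-confdef",
        "-o", "Dpkg::Options::=--force-confold",
        "install", *PACKAGES,
    ]
    subprocess.run(cmd, check=True)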
2026-03-08T23:23:57.644 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.644 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.644 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:23:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:57.653 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-08T23:23:57.660 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-08T23:23:57.660 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-08T23:23:57.668 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:23:57.676 INFO:teuthology.orchestra.run.vm10.stdout:Selecting previously unselected package multipath-tools.
2026-03-08T23:23:57.681 INFO:teuthology.orchestra.run.vm10.stdout:Preparing to unpack .../multipath-tools_0.8.8-1ubuntu1.22.04.4_amd64.deb ...
2026-03-08T23:23:57.685 INFO:teuthology.orchestra.run.vm10.stdout:Unpacking multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-08T23:23:57.704 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-08T23:23:57.725 INFO:teuthology.orchestra.run.vm10.stdout:Setting up multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-08T23:23:57.853 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:23:57.853 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:23:57.979 INFO:teuthology.orchestra.run.vm04.stdout:open-iscsi is already the newest version (2.1.5-1ubuntu1.1).
2026-03-08T23:23:57.979 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:23:57.979 INFO:teuthology.orchestra.run.vm04.stdout: libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:23:57.979 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:23:57.980 INFO:teuthology.orchestra.run.vm04.stdout:Suggested packages:
2026-03-08T23:23:57.980 INFO:teuthology.orchestra.run.vm04.stdout: multipath-tools-boot
2026-03-08T23:23:57.995 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed:
2026-03-08T23:23:57.995 INFO:teuthology.orchestra.run.vm04.stdout: multipath-tools python3-jmespath python3-xmltodict
2026-03-08T23:23:58.083 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:23:58.083 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 365 kB of archives.
2026-03-08T23:23:58.083 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1399 kB of additional disk space will be used.
2026-03-08T23:23:58.083 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-08T23:23:58.099 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.099 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.100 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.100 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.100 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.101 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-08T23:23:58.103 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 multipath-tools amd64 0.8.8-1ubuntu1.22.04.4 [331 kB]
2026-03-08T23:23:58.192 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:23:58.192 INFO:teuthology.orchestra.run.vm02.stdout:Running kernel seems to be up-to-date.
2026-03-08T23:23:58.192 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:23:58.192 INFO:teuthology.orchestra.run.vm02.stdout:Services to be restarted:
2026-03-08T23:23:58.196 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart packagekit.service
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:Service restarts being deferred:
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart unattended-upgrades.service
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:No containers need to be restarted.
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:No user sessions are running outdated binaries.
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:23:58.202 INFO:teuthology.orchestra.run.vm02.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T23:23:58.210 INFO:teuthology.orchestra.run.vm10.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-08T23:23:58.214 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-08T23:23:58.277 INFO:teuthology.orchestra.run.vm10.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-08T23:23:58.308 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 365 kB in 0s (2773 kB/s)
2026-03-08T23:23:58.320 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jmespath.
2026-03-08T23:23:58.343 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:23:58.347 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-08T23:23:58.349 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-08T23:23:58.350 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-08T23:23:58.366 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-08T23:23:58.371 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-08T23:23:58.372 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-08T23:23:58.388 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package multipath-tools.
2026-03-08T23:23:58.394 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../multipath-tools_0.8.8-1ubuntu1.22.04.4_amd64.deb ...
2026-03-08T23:23:58.399 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-08T23:23:58.399 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:23:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:58 vm10 bash[20034]: cluster 2026-03-08T23:23:56.238507+0000 mgr.x (mgr.14150) 344 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:58 vm10 bash[20034]: cluster 2026-03-08T23:23:56.238507+0000 mgr.x (mgr.14150) 344 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:58.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:23:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:23:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:23:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:23:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.439 INFO:teuthology.orchestra.run.vm04.stdout:Setting up multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-08T23:23:58.465 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:58 vm04 bash[19918]: cluster 2026-03-08T23:23:56.238507+0000 mgr.x (mgr.14150) 344 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:58.465 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:58 vm04 bash[19918]: cluster 2026-03-08T23:23:56.238507+0000 mgr.x (mgr.14150) 344 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:58.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:58 vm02 bash[17457]: cluster 2026-03-08T23:23:56.238507+0000 mgr.x (mgr.14150) 344 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:58.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:58 vm02 bash[17457]: cluster 2026-03-08T23:23:56.238507+0000 mgr.x (mgr.14150) 344 : cluster [DBG] pgmap v286: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:23:58.763 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:58.763 INFO:teuthology.orchestra.run.vm10.stdout:Running kernel seems to be up-to-date.
2026-03-08T23:23:58.763 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:58.763 INFO:teuthology.orchestra.run.vm10.stdout:Services to be restarted:
2026-03-08T23:23:58.766 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart packagekit.service
2026-03-08T23:23:58.768 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:58.768 INFO:teuthology.orchestra.run.vm10.stdout:Service restarts being deferred:
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout: systemctl restart unattended-upgrades.service
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout:No containers need to be restarted.
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout:No user sessions are running outdated binaries.
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout:
2026-03-08T23:23:58.769 INFO:teuthology.orchestra.run.vm10.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T23:23:58.812 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.812 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.812 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.812 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:58.912 INFO:teuthology.orchestra.run.vm04.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-08T23:23:58.915 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-08T23:23:58.978 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-08T23:23:59.047 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:23:59.102 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:23:59.123 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:23:59.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:59.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:59.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:23:59.124 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:23:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
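The "Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142." lines on vm10 and vm04 appear to come from the multipath-tools postinst: deb-systemd-invoke prints that message when its systemctl call fails, and dpkg carries on regardless. A quick follow-up check that multipathd actually came up could look like the sketch below; nothing like this runs in the job itself, and the unit name is simply the one multipath-tools ships.

    # Sketch: confirm the multipathd unit's state after the postinst hiccup.
    # `systemctl is-active` exits nonzero for anything but "active", so the
    # return code is deliberately not treated as an error here.
    import subprocess

    result = subprocess.run(
        ["systemctl", "is-active", "multipathd"],
        capture_output=True, text=True,
    )
    print("multipathd:", result.stdout.strip() or "unknown")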
2026-03-08T23:23:59.126 DEBUG:teuthology.orchestra.run.vm02:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install open-iscsi multipath-tools python3-xmltodict python3-jmespath
2026-03-08T23:23:59.202 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:23:59.384 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:23:59.385 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:23:59.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:59 vm02 bash[17457]: cluster 2026-03-08T23:23:58.238764+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:59.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:23:59 vm02 bash[17457]: cluster 2026-03-08T23:23:58.238764+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:59.472 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:59.472 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date.
2026-03-08T23:23:59.472 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:59.472 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted:
2026-03-08T23:23:59.475 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service
2026-03-08T23:23:59.478 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:59.478 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred:
2026-03-08T23:23:59.478 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service
2026-03-08T23:23:59.479 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:59.479 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted.
2026-03-08T23:23:59.479 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:59.479 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries.
2026-03-08T23:23:59.479 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-08T23:23:59.479 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T23:23:59.510 INFO:teuthology.orchestra.run.vm02.stdout:open-iscsi is already the newest version (2.1.5-1ubuntu1.1).
2026-03-08T23:23:59.510 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:23:59.511 INFO:teuthology.orchestra.run.vm02.stdout: libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:23:59.511 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:23:59.512 INFO:teuthology.orchestra.run.vm02.stdout:Suggested packages:
2026-03-08T23:23:59.512 INFO:teuthology.orchestra.run.vm02.stdout: multipath-tools-boot
2026-03-08T23:23:59.524 INFO:teuthology.orchestra.run.vm02.stdout:The following NEW packages will be installed:
2026-03-08T23:23:59.524 INFO:teuthology.orchestra.run.vm02.stdout: multipath-tools python3-jmespath python3-xmltodict
2026-03-08T23:23:59.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:23:59 vm04 bash[19918]: cluster 2026-03-08T23:23:58.238764+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:59.645 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:23:59.647 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:23:59 vm10 bash[20034]: cluster 2026-03-08T23:23:58.238764+0000 mgr.x (mgr.14150) 345 : cluster [DBG] pgmap v287: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:23:59.649 DEBUG:teuthology.parallel:result is None
2026-03-08T23:23:59.985 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 3 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:23:59.985 INFO:teuthology.orchestra.run.vm02.stdout:Need to get 365 kB of archives.
2026-03-08T23:23:59.985 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 1399 kB of additional disk space will be used.
2026-03-08T23:23:59.985 INFO:teuthology.orchestra.run.vm02.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-08T23:24:00.194 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
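Each host warns here that apt's --force-yes flag is deprecated. A minimal sketch of the same noninteractive install using the --allow-* replacements the warning points to, with the package list taken from the apt-get command logged above (Python stand-in for the shell invocation):

    # Sketch: the install command above, minus the deprecated --force-yes.
    # The --allow-* flags cover the cases --force-yes used to force through.
    import subprocess

    packages = ["open-iscsi", "multipath-tools", "python3-xmltodict", "python3-jmespath"]
    subprocess.run(
        [
            "sudo", "DEBIAN_FRONTEND=noninteractive", "apt-get", "-y",
            "--allow-downgrades", "--allow-remove-essential", "--allow-change-held-packages",
            "-o", "Dpkg::Options::=--force-confdef",
            "-o", "Dpkg::Options::=--force-confold",
            "install", *packages,
        ],
        check=True,
    )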
2026-03-08T23:24:00.197 DEBUG:teuthology.parallel:result is None
2026-03-08T23:24:00.198 INFO:teuthology.orchestra.run.vm02.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-08T23:24:00.222 INFO:teuthology.orchestra.run.vm02.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 multipath-tools amd64 0.8.8-1ubuntu1.22.04.4 [331 kB]
2026-03-08T23:24:00.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:00 vm02 bash[17457]: audit 2026-03-08T23:24:00.027131+0000 mgr.x (mgr.14150) 346 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:00.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:00 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:24:00.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:00 vm04 bash[19918]: audit 2026-03-08T23:24:00.027131+0000 mgr.x (mgr.14150) 346 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:00.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:00 vm10 bash[20034]: audit 2026-03-08T23:24:00.027131+0000 mgr.x (mgr.14150) 346 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:00.707 INFO:teuthology.orchestra.run.vm02.stdout:Fetched 365 kB in 1s (362 kB/s)
2026-03-08T23:24:00.718 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-jmespath.
2026-03-08T23:24:00.745 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118577 files and directories currently installed.)
2026-03-08T23:24:00.747 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-08T23:24:00.748 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-08T23:24:00.764 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-08T23:24:00.769 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-08T23:24:00.770 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-08T23:24:00.783 INFO:teuthology.orchestra.run.vm02.stdout:Selecting previously unselected package multipath-tools.
2026-03-08T23:24:00.788 INFO:teuthology.orchestra.run.vm02.stdout:Preparing to unpack .../multipath-tools_0.8.8-1ubuntu1.22.04.4_amd64.deb ...
2026-03-08T23:24:00.793 INFO:teuthology.orchestra.run.vm02.stdout:Unpacking multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-08T23:24:00.831 INFO:teuthology.orchestra.run.vm02.stdout:Setting up multipath-tools (0.8.8-1ubuntu1.22.04.4) ...
2026-03-08T23:24:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:24:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.144 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:24:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.144 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:24:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.247 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:24:00 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:24:01.304 INFO:teuthology.orchestra.run.vm02.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-08T23:24:01.309 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-08T23:24:01.377 INFO:teuthology.orchestra.run.vm02.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-08T23:24:01.447 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:24:01.531 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:24:01.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:01 vm04 bash[19918]: cluster 2026-03-08T23:24:00.238993+0000 mgr.x (mgr.14150) 347 : cluster [DBG] pgmap v288: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:01.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:01 vm04 bash[19918]: audit 2026-03-08T23:24:00.969209+0000 mgr.x (mgr.14150) 348 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:01.644 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:24:01 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.644 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:24:01 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.644 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:24:01 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:01 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:01 vm02 bash[17457]: cluster 2026-03-08T23:24:00.238993+0000 mgr.x (mgr.14150) 347 : cluster [DBG] pgmap v288: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:01.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:01 vm02 bash[17457]: audit 2026-03-08T23:24:00.969209+0000 mgr.x (mgr.14150) 348 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:01.644 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:01 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:24:01.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:01 vm10 bash[20034]: cluster 2026-03-08T23:24:00.238993+0000 mgr.x (mgr.14150) 347 : cluster [DBG] pgmap v288: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:01.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:01 vm10 bash[20034]: audit 2026-03-08T23:24:00.969209+0000 mgr.x (mgr.14150) 348 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:02.019 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:24:02.019 INFO:teuthology.orchestra.run.vm02.stdout:Running kernel seems to be up-to-date.
2026-03-08T23:24:02.019 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:24:02.019 INFO:teuthology.orchestra.run.vm02.stdout:Services to be restarted:
2026-03-08T23:24:02.022 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart packagekit.service
2026-03-08T23:24:02.024 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:Service restarts being deferred:
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout: systemctl restart unattended-upgrades.service
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:No containers need to be restarted.
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:No user sessions are running outdated binaries.
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-08T23:24:02.025 INFO:teuthology.orchestra.run.vm02.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T23:24:02.729 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:24:02.732 DEBUG:teuthology.parallel:result is None
2026-03-08T23:24:02.733 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:24:03.318 DEBUG:teuthology.orchestra.run.vm02:> dpkg-query -W -f '${Version}' ceph
2026-03-08T23:24:03.327 INFO:teuthology.orchestra.run.vm02.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:24:03.327 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:24:03.327 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
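The version check above has two steps: teuthology asks Shaman for ready builds of the target sha1, then confirms that the installed package embeds the matching short hash (the "ge911bdeb" component of 19.2.3-678-ge911bdeb-1jammy). A rough sketch of the same check, assuming only that the Shaman endpoint returns a non-empty JSON list when builds are ready:

    # Sketch of the version verification logged above. Assumption: Shaman's
    # /api/search returns a non-empty JSON list when ready builds exist.
    import subprocess
    import requests

    sha1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"
    url = ("https://shaman.ceph.com/api/search?status=ready&project=ceph"
           "&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=" + sha1)
    assert requests.get(url, timeout=30).json(), "no ready builds for this sha1"

    installed = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Version}", "ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # 19.2.3-678-ge911bdeb-1jammy embeds "g" + the first 8 hex digits of the sha1
    assert "g" + sha1[:8] in installed, f"unexpected ceph version: {installed}"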
2026-03-08T23:24:03.328 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:24:03.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:03 vm04 bash[19918]: cluster 2026-03-08T23:24:02.239215+0000 mgr.x (mgr.14150) 349 : cluster [DBG] pgmap v289: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:03.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:03 vm02 bash[17457]: cluster 2026-03-08T23:24:02.239215+0000 mgr.x (mgr.14150) 349 : cluster [DBG] pgmap v289: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:03.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:03 vm10 bash[20034]: cluster 2026-03-08T23:24:02.239215+0000 mgr.x (mgr.14150) 349 : cluster [DBG] pgmap v289: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:03.924 DEBUG:teuthology.orchestra.run.vm04:> dpkg-query -W -f '${Version}' ceph
2026-03-08T23:24:03.932 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:24:03.932 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:24:03.932 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-08T23:24:03.933 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T23:24:04.513 DEBUG:teuthology.orchestra.run.vm10:> dpkg-query -W -f '${Version}' ceph
2026-03-08T23:24:04.521 INFO:teuthology.orchestra.run.vm10.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:24:04.521 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T23:24:04.521 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-08T23:24:04.522 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-08T23:24:04.522 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:24:04.522 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-08T23:24:04.529 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:24:04.529 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-08T23:24:04.536 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:24:04.536 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-08T23:24:04.568 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-08T23:24:04.569 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:24:04.569 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/daemon-helper
2026-03-08T23:24:04.576 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-08T23:24:04.624 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:24:04.624 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper
2026-03-08T23:24:04.631 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-08T23:24:04.679 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:24:04.679 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/daemon-helper
2026-03-08T23:24:04.686 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-08T23:24:04.736 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-08T23:24:04.736 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:24:04.736 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-08T23:24:04.743 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-08T23:24:04.793 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:24:04.793 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-08T23:24:04.800 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-08T23:24:04.846 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:24:04.846 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-08T23:24:04.853 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-08T23:24:04.904 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-08T23:24:04.904 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-08T23:24:04.904 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/usr/bin/stdin-killer
2026-03-08T23:24:04.911 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-08T23:24:04.964 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:24:04.964 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer
2026-03-08T23:24:04.971 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-08T23:24:05.019 DEBUG:teuthology.orchestra.run.vm10:> set -ex
2026-03-08T23:24:05.019 DEBUG:teuthology.orchestra.run.vm10:> sudo dd of=/usr/bin/stdin-killer
2026-03-08T23:24:05.026 DEBUG:teuthology.orchestra.run.vm10:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-08T23:24:05.076 INFO:teuthology.run_tasks:Running task ceph_iscsi_client...
2026-03-08T23:24:05.079 INFO:tasks.ceph_iscsi_client:Setting up ceph-iscsi client...
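The entries that follow show the client setup one command at a time: write a fixed initiator name, restart iscsid to pick it up, load the dm_multipath module, install a multipath.conf, and start multipathd. A consolidated sketch of that sequence; the log ships the two files via dd without showing their contents, so both values below are placeholders:

    # Consolidated sketch of the ceph_iscsi_client setup steps logged below.
    # initiator_name and multipath_conf are placeholders; the real contents are
    # written by the task and are not captured in this log.
    import subprocess

    initiator_name = "InitiatorName=iqn.1994-05.com.redhat:client\n"  # placeholder
    multipath_conf = "defaults {\n}\n"  # placeholder

    def sudo_write(path, data):
        # Mirrors the log's `sudo dd of=<path>` pattern for root-owned files.
        subprocess.run(["sudo", "dd", "of=" + path], input=data, text=True, check=True)

    subprocess.run(["sudo", "mkdir", "-p", "/etc/iscsi"], check=True)
    sudo_write("/etc/iscsi/initiatorname.iscsi", initiator_name)
    subprocess.run(["sudo", "systemctl", "restart", "iscsid"], check=True)
    subprocess.run(["sudo", "modprobe", "dm_multipath"], check=True)
    sudo_write("/etc/multipath.conf", multipath_conf)
    subprocess.run(["sudo", "systemctl", "start", "multipathd"], check=True)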
2026-03-08T23:24:05.079 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:24:05.080 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/iscsi
2026-03-08T23:24:05.080 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/iscsi/initiatorname.iscsi
2026-03-08T23:24:05.091 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl restart iscsid
2026-03-08T23:24:05.149 DEBUG:teuthology.orchestra.run.vm04:> sudo modprobe dm_multipath
2026-03-08T23:24:05.200 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-08T23:24:05.200 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/multipath.conf
2026-03-08T23:24:05.247 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl start multipathd
2026-03-08T23:24:05.297 INFO:teuthology.run_tasks:Running task cram...
2026-03-08T23:24:05.300 INFO:tasks.cram:Pulling tests from https://github.com/kshtsk/ceph.git ref 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-08T23:24:05.301 DEBUG:teuthology.orchestra.run.vm02:> mkdir -- /home/ubuntu/cephtest/archive/cram.client.0 && python3 -m venv /home/ubuntu/cephtest/virtualenv && /home/ubuntu/cephtest/virtualenv/bin/pip install cram==0.6
2026-03-08T23:24:05.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:05 vm04 bash[19918]: cluster 2026-03-08T23:24:04.239471+0000 mgr.x (mgr.14150) 350 : cluster [DBG] pgmap v290: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:05.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:05 vm02 bash[17457]: cluster 2026-03-08T23:24:04.239471+0000 mgr.x (mgr.14150) 350 : cluster [DBG] pgmap v290: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:05.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:05 vm10 bash[20034]: cluster 2026-03-08T23:24:04.239471+0000 mgr.x (mgr.14150) 350 : cluster [DBG] pgmap v290: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:07.119 INFO:teuthology.orchestra.run.vm02.stdout:Collecting cram==0.6
2026-03-08T23:24:07.313 INFO:teuthology.orchestra.run.vm02.stdout: Downloading cram-0.6-py2.py3-none-any.whl (17 kB)
2026-03-08T23:24:07.331 INFO:teuthology.orchestra.run.vm02.stdout:Installing collected packages: cram
2026-03-08T23:24:07.340 INFO:teuthology.orchestra.run.vm02.stdout:Successfully installed cram-0.6
2026-03-08T23:24:07.376 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
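The cram task bootstraps the same way on each client: a dedicated virtualenv pinned to cram 0.6, a fresh clone of the suite repo checked out at the suite sha1, and then cram is run against the configured .t files. A sketch of the equivalent manual run; the test path below is a placeholder:

    # Sketch of the cram bootstrap logged above: venv + cram==0.6, clone the
    # suite repo at the pinned sha1, then run cram on a test file. test_file is
    # a placeholder; the actual .t files come from the job's cram config.
    import subprocess

    repo = "https://github.com/kshtsk/ceph.git"
    sha1 = "569c3e99c9b32a51b4eaf08731c728f4513ed589"
    venv = "/home/ubuntu/cephtest/virtualenv"
    clone = "/home/ubuntu/cephtest/clone.client.0"
    test_file = "path/to/test.t"  # placeholder

    subprocess.run(["python3", "-m", "venv", venv], check=True)
    subprocess.run([venv + "/bin/pip", "install", "cram==0.6"], check=True)
    subprocess.run(["git", "clone", repo, clone], check=True)
    subprocess.run(["git", "checkout", sha1], cwd=clone, check=True)
    # cram compares each command's output against the expectations in the .t file
    subprocess.run([venv + "/bin/cram", test_file], cwd=clone, check=True)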
2026-03-08T23:24:07.379 INFO:teuthology.orchestra.run.vm02.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
2026-03-08T23:24:07.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:07 vm04 bash[19918]: cluster 2026-03-08T23:24:06.239738+0000 mgr.x (mgr.14150) 351 : cluster [DBG] pgmap v291: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:07.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:07 vm02 bash[17457]: cluster 2026-03-08T23:24:06.239738+0000 mgr.x (mgr.14150) 351 : cluster [DBG] pgmap v291: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:07.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:07 vm10 bash[20034]: cluster 2026-03-08T23:24:06.239738+0000 mgr.x (mgr.14150) 351 : cluster [DBG] pgmap v291: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:09.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:09 vm04 bash[19918]: cluster 2026-03-08T23:24:08.239962+0000 mgr.x (mgr.14150) 352 : cluster [DBG] pgmap v292: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:09.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:09 vm02 bash[17457]: cluster 2026-03-08T23:24:08.239962+0000 mgr.x (mgr.14150) 352 : cluster [DBG] pgmap v292: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:09.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:09 vm10 bash[20034]: cluster 2026-03-08T23:24:08.239962+0000 mgr.x (mgr.14150) 352 : cluster [DBG] pgmap v292: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:10.287 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:10 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:24:10.287 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:10 vm02 bash[17457]: audit 2026-03-08T23:24:10.034948+0000 mgr.x (mgr.14150) 353 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:10.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:10 vm04 bash[19918]: audit 2026-03-08T23:24:10.034948+0000 mgr.x (mgr.14150) 353 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:10.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:10 vm10 bash[20034]: audit 2026-03-08T23:24:10.034948+0000 mgr.x (mgr.14150) 353 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:11.290 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:24:10 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:24:11.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:11 vm04 bash[19918]: cluster 2026-03-08T23:24:10.240208+0000 mgr.x (mgr.14150) 354 : cluster [DBG] pgmap v293: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:11.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:11 vm04 bash[19918]: audit 2026-03-08T23:24:10.972407+0000 mgr.x (mgr.14150) 355 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:11 vm02 bash[17457]: cluster 2026-03-08T23:24:10.240208+0000 mgr.x (mgr.14150) 354 : cluster [DBG] pgmap v293: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:11 vm02 bash[17457]: audit 2026-03-08T23:24:10.972407+0000 mgr.x (mgr.14150) 355 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:11.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:11 vm10 bash[20034]: cluster 2026-03-08T23:24:10.240208+0000 mgr.x (mgr.14150) 354 : cluster [DBG] pgmap v293: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:11.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:11 vm10 bash[20034]: audit 2026-03-08T23:24:10.972407+0000 mgr.x (mgr.14150) 355 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:13.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:13 vm04 bash[19918]: cluster 2026-03-08T23:24:12.240440+0000 mgr.x (mgr.14150) 356 : cluster [DBG] pgmap v294: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:13.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:13 vm02 bash[17457]: cluster 2026-03-08T23:24:12.240440+0000 mgr.x (mgr.14150) 356 : cluster [DBG] pgmap v294: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:13.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:13 vm10 bash[20034]: cluster 2026-03-08T23:24:12.240440+0000 mgr.x (mgr.14150) 356 : cluster [DBG] pgmap v294: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:15.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:15 vm04 bash[19918]: cluster 2026-03-08T23:24:14.240682+0000 mgr.x (mgr.14150) 357 : cluster [DBG] pgmap v295: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:15.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:15 vm02 bash[17457]: cluster 2026-03-08T23:24:14.240682+0000 mgr.x (mgr.14150) 357 : cluster [DBG] pgmap v295: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:15.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:15 vm10 bash[20034]: cluster 2026-03-08T23:24:14.240682+0000 mgr.x (mgr.14150) 357 : cluster [DBG] pgmap v295: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:16.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:16 vm04 bash[19918]: audit 2026-03-08T23:24:16.088694+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:24:16.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:16 vm02 bash[17457]: audit 2026-03-08T23:24:16.088694+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:24:16.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:16 vm10 bash[20034]: audit 2026-03-08T23:24:16.088694+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:24:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:17 vm04 bash[19918]: cluster 2026-03-08T23:24:16.240973+0000 mgr.x (mgr.14150) 358 : cluster [DBG] pgmap v296: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:17 vm04 bash[19918]: audit 2026-03-08T23:24:16.491461+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:24:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:17 vm04 bash[19918]: audit 2026-03-08T23:24:16.496674+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:24:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:17 vm04 bash[19918]: audit 2026-03-08T23:24:16.561671+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:24:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:17 vm02 bash[17457]: cluster 2026-03-08T23:24:16.240973+0000 mgr.x (mgr.14150) 358 : cluster [DBG] pgmap v296: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:17 vm02 bash[17457]: audit 2026-03-08T23:24:16.491461+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:24:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:17 vm02 bash[17457]: audit 2026-03-08T23:24:16.496674+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:24:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:17 vm02 bash[17457]: audit 2026-03-08T23:24:16.561671+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:24:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:17 vm10 bash[20034]: cluster 2026-03-08T23:24:16.240973+0000 mgr.x (mgr.14150) 358 : cluster [DBG] pgmap v296: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:17 vm10 bash[20034]: audit 2026-03-08T23:24:16.491461+0000 mon.a (mon.0) 746 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:24:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:17 vm10 bash[20034]: audit 2026-03-08T23:24:16.496674+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:24:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:17 vm10 bash[20034]: audit 2026-03-08T23:24:16.561671+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:24:19.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:19 vm04 bash[19918]: cluster 2026-03-08T23:24:18.241248+0000 mgr.x (mgr.14150) 359 : cluster [DBG] pgmap v297: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:19.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:19 vm02 bash[17457]: cluster 2026-03-08T23:24:18.241248+0000 mgr.x (mgr.14150) 359 : cluster [DBG] pgmap v297: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:19.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:19 vm10 bash[20034]: cluster 2026-03-08T23:24:18.241248+0000 mgr.x (mgr.14150) 359 : cluster [DBG] pgmap v297: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:20.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:20 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:24:20.975 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:20 vm10 bash[20034]: audit 2026-03-08T23:24:20.042951+0000 mgr.x (mgr.14150) 360 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:21.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:20 vm04 bash[19918]: audit 2026-03-08T23:24:20.042951+0000 mgr.x (mgr.14150) 360 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:21.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:20 vm02 bash[17457]: audit 2026-03-08T23:24:20.042951+0000 mgr.x (mgr.14150) 360 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:21.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:24:20 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:24:22.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:21 vm04 bash[19918]: cluster 2026-03-08T23:24:20.241526+0000 mgr.x (mgr.14150) 361 : cluster [DBG] pgmap v298: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:22.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:21 vm04 bash[19918]: audit 2026-03-08T23:24:20.975850+0000 mgr.x (mgr.14150) 362 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:22.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:21 vm02 bash[17457]: cluster 2026-03-08T23:24:20.241526+0000 mgr.x (mgr.14150) 361 : cluster [DBG] pgmap v298: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:22.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:21 vm02 bash[17457]: audit 2026-03-08T23:24:20.975850+0000 mgr.x (mgr.14150) 362 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:22.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:21 vm10 bash[20034]: cluster 2026-03-08T23:24:20.241526+0000 mgr.x (mgr.14150) 361 : cluster [DBG] pgmap v298: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:22.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:21 vm10 bash[20034]: audit 2026-03-08T23:24:20.975850+0000 mgr.x (mgr.14150) 362 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:24.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:23 vm04 bash[19918]: cluster 2026-03-08T23:24:22.241786+0000 mgr.x (mgr.14150) 363 : cluster [DBG] pgmap v299: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:24.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:23 vm02 bash[17457]: cluster 2026-03-08T23:24:22.241786+0000 mgr.x (mgr.14150) 363 : cluster [DBG] pgmap v299: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:24.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:23 vm10 bash[20034]: cluster 2026-03-08T23:24:22.241786+0000 mgr.x (mgr.14150) 363 : cluster [DBG] pgmap v299: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:26.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:25 vm04 bash[19918]: cluster 2026-03-08T23:24:24.242040+0000 mgr.x (mgr.14150) 364 : cluster [DBG] pgmap v300: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:26.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:25 vm02 bash[17457]: cluster 2026-03-08T23:24:24.242040+0000 mgr.x (mgr.14150) 364 : cluster [DBG] pgmap v300: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:26.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:25 vm10 bash[20034]: cluster 2026-03-08T23:24:24.242040+0000 mgr.x (mgr.14150) 364 : cluster [DBG] pgmap v300: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:28.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:27 vm04 bash[19918]: cluster 2026-03-08T23:24:26.242333+0000 mgr.x (mgr.14150) 365 : cluster [DBG] pgmap v301: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:28.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:27 vm02 bash[17457]: cluster 2026-03-08T23:24:26.242333+0000 mgr.x (mgr.14150) 365 : cluster [DBG] pgmap v301: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:28.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:27 vm10 bash[20034]: cluster 2026-03-08T23:24:26.242333+0000 mgr.x (mgr.14150) 365 : cluster [DBG] pgmap v301: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:24:30.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:30 vm02 bash[17457]: cluster 2026-03-08T23:24:28.242645+0000 mgr.x (mgr.14150) 366 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:30.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:30 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:24:30.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:30 vm04 bash[19918]: cluster 2026-03-08T23:24:28.242645+0000 mgr.x (mgr.14150) 366 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:30.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:30 vm10 bash[20034]: cluster 2026-03-08T23:24:28.242645+0000 mgr.x (mgr.14150) 366 : cluster [DBG] pgmap v302: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:24:31.251 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:24:30 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:24:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:31 vm04 bash[19918]: audit 2026-03-08T23:24:30.050960+0000 mgr.x (mgr.14150) 367 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:24:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:31 vm04 bash[19918]: cluster 2026-03-08T23:24:30.242896+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:31 vm04 bash[19918]: cluster 2026-03-08T23:24:30.242896+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:31 vm04 bash[19918]: audit 2026-03-08T23:24:30.979163+0000 mgr.x (mgr.14150) 369 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:31 vm04 bash[19918]: audit 2026-03-08T23:24:30.979163+0000 mgr.x (mgr.14150) 369 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:31 vm02 bash[17457]: audit 2026-03-08T23:24:30.050960+0000 mgr.x (mgr.14150) 367 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:31 vm02 bash[17457]: audit 2026-03-08T23:24:30.050960+0000 mgr.x (mgr.14150) 367 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:31 vm02 bash[17457]: cluster 2026-03-08T23:24:30.242896+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:31 vm02 bash[17457]: cluster 2026-03-08T23:24:30.242896+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:31 vm02 bash[17457]: audit 2026-03-08T23:24:30.979163+0000 mgr.x (mgr.14150) 369 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:31 vm02 bash[17457]: audit 2026-03-08T23:24:30.979163+0000 mgr.x (mgr.14150) 369 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:31 vm10 bash[20034]: audit 2026-03-08T23:24:30.050960+0000 mgr.x (mgr.14150) 367 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:31 vm10 bash[20034]: audit 2026-03-08T23:24:30.050960+0000 mgr.x (mgr.14150) 367 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:31 vm10 bash[20034]: cluster 
2026-03-08T23:24:30.242896+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:31 vm10 bash[20034]: cluster 2026-03-08T23:24:30.242896+0000 mgr.x (mgr.14150) 368 : cluster [DBG] pgmap v303: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:31 vm10 bash[20034]: audit 2026-03-08T23:24:30.979163+0000 mgr.x (mgr.14150) 369 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:31.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:31 vm10 bash[20034]: audit 2026-03-08T23:24:30.979163+0000 mgr.x (mgr.14150) 369 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:33.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:33 vm04 bash[19918]: cluster 2026-03-08T23:24:32.243135+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:33.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:33 vm04 bash[19918]: cluster 2026-03-08T23:24:32.243135+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:33 vm02 bash[17457]: cluster 2026-03-08T23:24:32.243135+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:33 vm02 bash[17457]: cluster 2026-03-08T23:24:32.243135+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:33 vm10 bash[20034]: cluster 2026-03-08T23:24:32.243135+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:33 vm10 bash[20034]: cluster 2026-03-08T23:24:32.243135+0000 mgr.x (mgr.14150) 370 : cluster [DBG] pgmap v304: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:35 vm04 bash[19918]: cluster 2026-03-08T23:24:34.243397+0000 mgr.x (mgr.14150) 371 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:35 vm04 bash[19918]: cluster 2026-03-08T23:24:34.243397+0000 mgr.x (mgr.14150) 371 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:35 vm02 bash[17457]: cluster 2026-03-08T23:24:34.243397+0000 mgr.x (mgr.14150) 371 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:35 vm02 bash[17457]: cluster 2026-03-08T23:24:34.243397+0000 mgr.x (mgr.14150) 371 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:35 vm10 bash[20034]: cluster 2026-03-08T23:24:34.243397+0000 mgr.x (mgr.14150) 371 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:35.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:35 vm10 bash[20034]: cluster 2026-03-08T23:24:34.243397+0000 mgr.x (mgr.14150) 371 : cluster [DBG] pgmap v305: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:37.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:37 vm04 bash[19918]: cluster 2026-03-08T23:24:36.243670+0000 mgr.x (mgr.14150) 372 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:37.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:37 vm04 bash[19918]: cluster 2026-03-08T23:24:36.243670+0000 mgr.x (mgr.14150) 372 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:37 vm02 bash[17457]: cluster 2026-03-08T23:24:36.243670+0000 mgr.x (mgr.14150) 372 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:37 vm02 bash[17457]: cluster 2026-03-08T23:24:36.243670+0000 mgr.x (mgr.14150) 372 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:37 vm10 bash[20034]: cluster 2026-03-08T23:24:36.243670+0000 mgr.x (mgr.14150) 372 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:37 vm10 bash[20034]: cluster 2026-03-08T23:24:36.243670+0000 mgr.x (mgr.14150) 372 : cluster [DBG] pgmap v306: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:39 vm04 bash[19918]: cluster 2026-03-08T23:24:38.243952+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:39 vm04 bash[19918]: cluster 2026-03-08T23:24:38.243952+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:39 vm02 bash[17457]: cluster 2026-03-08T23:24:38.243952+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:39 vm02 bash[17457]: cluster 
2026-03-08T23:24:38.243952+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:39 vm10 bash[20034]: cluster 2026-03-08T23:24:38.243952+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:39 vm10 bash[20034]: cluster 2026-03-08T23:24:38.243952+0000 mgr.x (mgr.14150) 373 : cluster [DBG] pgmap v307: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:40.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:40 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:24:40.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:40 vm04 bash[19918]: audit 2026-03-08T23:24:40.058940+0000 mgr.x (mgr.14150) 374 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:40.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:40 vm04 bash[19918]: audit 2026-03-08T23:24:40.058940+0000 mgr.x (mgr.14150) 374 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:40 vm02 bash[17457]: audit 2026-03-08T23:24:40.058940+0000 mgr.x (mgr.14150) 374 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:40 vm02 bash[17457]: audit 2026-03-08T23:24:40.058940+0000 mgr.x (mgr.14150) 374 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:40.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:40 vm10 bash[20034]: audit 2026-03-08T23:24:40.058940+0000 mgr.x (mgr.14150) 374 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:40.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:40 vm10 bash[20034]: audit 2026-03-08T23:24:40.058940+0000 mgr.x (mgr.14150) 374 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:41.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:24:40 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:24:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:41 vm04 bash[19918]: cluster 2026-03-08T23:24:40.244210+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:41 vm04 bash[19918]: cluster 2026-03-08T23:24:40.244210+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:41 vm04 bash[19918]: audit 2026-03-08T23:24:40.986754+0000 mgr.x (mgr.14150) 376 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:41 vm04 bash[19918]: audit 2026-03-08T23:24:40.986754+0000 mgr.x (mgr.14150) 376 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:41 vm02 bash[17457]: cluster 2026-03-08T23:24:40.244210+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:41 vm02 bash[17457]: cluster 2026-03-08T23:24:40.244210+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:41 vm02 bash[17457]: audit 2026-03-08T23:24:40.986754+0000 mgr.x (mgr.14150) 376 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:41 vm02 bash[17457]: audit 2026-03-08T23:24:40.986754+0000 mgr.x (mgr.14150) 376 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:41 vm10 bash[20034]: cluster 2026-03-08T23:24:40.244210+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:41 vm10 bash[20034]: cluster 2026-03-08T23:24:40.244210+0000 mgr.x (mgr.14150) 375 : cluster [DBG] pgmap v308: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:41 vm10 bash[20034]: audit 2026-03-08T23:24:40.986754+0000 mgr.x (mgr.14150) 376 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:41 vm10 bash[20034]: audit 2026-03-08T23:24:40.986754+0000 mgr.x (mgr.14150) 376 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:24:43.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:43 vm04 bash[19918]: cluster 2026-03-08T23:24:42.244473+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:43.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:43 vm04 bash[19918]: cluster 2026-03-08T23:24:42.244473+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:43.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:43 vm02 bash[17457]: cluster 2026-03-08T23:24:42.244473+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:43.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 
23:24:43 vm02 bash[17457]: cluster 2026-03-08T23:24:42.244473+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:43 vm10 bash[20034]: cluster 2026-03-08T23:24:42.244473+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:43 vm10 bash[20034]: cluster 2026-03-08T23:24:42.244473+0000 mgr.x (mgr.14150) 377 : cluster [DBG] pgmap v309: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:45.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:45 vm04 bash[19918]: cluster 2026-03-08T23:24:44.244704+0000 mgr.x (mgr.14150) 378 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:45.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:45 vm04 bash[19918]: cluster 2026-03-08T23:24:44.244704+0000 mgr.x (mgr.14150) 378 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:45 vm02 bash[17457]: cluster 2026-03-08T23:24:44.244704+0000 mgr.x (mgr.14150) 378 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:45 vm02 bash[17457]: cluster 2026-03-08T23:24:44.244704+0000 mgr.x (mgr.14150) 378 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:45 vm10 bash[20034]: cluster 2026-03-08T23:24:44.244704+0000 mgr.x (mgr.14150) 378 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:24:45 vm10 bash[20034]: cluster 2026-03-08T23:24:44.244704+0000 mgr.x (mgr.14150) 378 : cluster [DBG] pgmap v310: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:24:47.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:47 vm04 bash[19918]: cluster 2026-03-08T23:24:46.244971+0000 mgr.x (mgr.14150) 379 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:47.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:24:47 vm04 bash[19918]: cluster 2026-03-08T23:24:46.244971+0000 mgr.x (mgr.14150) 379 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:47 vm02 bash[17457]: cluster 2026-03-08T23:24:46.244971+0000 mgr.x (mgr.14150) 379 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:24:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:24:47 vm02 bash[17457]: cluster 2026-03-08T23:24:46.244971+0000 mgr.x (mgr.14150) 379 : cluster [DBG] pgmap v311: 4 pgs: 4 active+clean; 449 
... (pgmap v312 unchanged: 4 pgs active+clean) ...
2026-03-08T23:24:50.314 INFO:teuthology.orchestra.run.vm02.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:state without impacting any branches by switching back to a branch.
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:  git switch -c <new-branch-name>
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:Or undo this operation with:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:  git switch -
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:24:50.315 INFO:teuthology.orchestra.run.vm02.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
2026-03-08T23:24:50.321 DEBUG:teuthology.orchestra.run.vm02:> cp -- /home/ubuntu/cephtest/clone.client.0/src/test/cli-integration/rbd/gwcli_create.t /home/ubuntu/cephtest/archive/cram.client.0
2026-03-08T23:24:50.368 DEBUG:teuthology.orchestra.run.vm04:> mkdir -- /home/ubuntu/cephtest/archive/cram.client.1 && python3 -m venv /home/ubuntu/cephtest/virtualenv && /home/ubuntu/cephtest/virtualenv/bin/pip install cram==0.6
2026-03-08T23:24:50.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:24:50 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:24:51.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:24:50 vm10 bash[39354]: debug there is no tcmu-runner data available
... (audits 381/383 dispatch the same service-status queries for client.iscsi.iscsi.a and client.iscsi.iscsi.b; pgmap v313 unchanged: 4 pgs active+clean) ...
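The two DEBUG commands above are the whole cram-harness setup for this workload: the test file is copied into the per-client archive directory, and cram 0.6 is installed into a disposable virtualenv. As a rough sketch of the invocation the cram task presumably issues next (the exact command line here is an assumption based on cram's standard CLI, not copied from this log):

    # Hypothetical reconstruction, not a line from this log: cram takes .t files
    # as arguments, re-runs the commands recorded in them, and exits non-zero if
    # any command's output diverges from the expected output in the file.
    /home/ubuntu/cephtest/virtualenv/bin/cram \
        /home/ubuntu/cephtest/archive/cram.client.1/iscsi_client.t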
2026-03-08T23:24:52.067 INFO:teuthology.orchestra.run.vm04.stdout:Collecting cram==0.6
2026-03-08T23:24:52.109 INFO:teuthology.orchestra.run.vm04.stdout: Downloading cram-0.6-py2.py3-none-any.whl (17 kB)
2026-03-08T23:24:52.124 INFO:teuthology.orchestra.run.vm04.stdout:Installing collected packages: cram
2026-03-08T23:24:52.130 INFO:teuthology.orchestra.run.vm04.stdout:Successfully installed cram-0.6
2026-03-08T23:24:52.168 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/clone.client.1 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.1 && cd /home/ubuntu/cephtest/clone.client.1 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-08T23:24:52.172 INFO:teuthology.orchestra.run.vm04.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.1'...
... (pgmap v314-v315 unchanged: 4 pgs active+clean, 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail) ...
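For context on what these .t files contain: in cram's format, two-space-indented lines beginning with '$ ' are commands, the indented lines following them are the exact stdout the command must produce, and unindented lines are comments. A minimal, made-up example of the format (not one of the rbd tests shipped in qa/):

    This unindented line is a comment in a cram .t file.
      $ echo iscsi
      iscsi
      $ true

cram also accepts '(re)' and '(glob)' suffixes on expected-output lines, the usual way such tests tolerate variable output.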
... (pgmap v316-v317 unchanged: 4 pgs active+clean) ...
2026-03-08T23:25:00.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:25:00 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:25:01.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:25:01 vm10 bash[39354]: debug there is no tcmu-runner data available
... (audits 388/390 dispatch service-status queries for client.iscsi.iscsi.a and client.iscsi.iscsi.b; pgmap v318-v322 unchanged: 4 pgs active+clean) ...
2026-03-08T23:25:10.353 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:25:10 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:25:11.356 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:25:11 vm10 bash[39354]: debug there is no tcmu-runner data available
... (audits 395/397 dispatch service-status queries; pgmap v323-v324 unchanged: 4 pgs active+clean) ...
2026-03-08T23:25:16.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:15 vm10 bash[20034]: cluster 2026-03-08T23:25:14.248834+0000 mgr.x (mgr.14150) 399 : cluster [DBG] pgmap v325: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB /
160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:17.130 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:16 vm04 bash[19918]: audit 2026-03-08T23:25:16.583669+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:25:17.130 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:16 vm04 bash[19918]: audit 2026-03-08T23:25:16.583669+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:25:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:16 vm02 bash[17457]: audit 2026-03-08T23:25:16.583669+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:25:17.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:16 vm02 bash[17457]: audit 2026-03-08T23:25:16.583669+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:25:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:16 vm10 bash[20034]: audit 2026-03-08T23:25:16.583669+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:25:17.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:16 vm10 bash[20034]: audit 2026-03-08T23:25:16.583669+0000 mon.a (mon.0) 749 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: cluster 2026-03-08T23:25:16.249096+0000 mgr.x (mgr.14150) 400 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: cluster 2026-03-08T23:25:16.249096+0000 mgr.x (mgr.14150) 400 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: audit 2026-03-08T23:25:17.068129+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: audit 2026-03-08T23:25:17.068129+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: audit 2026-03-08T23:25:17.068645+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: audit 2026-03-08T23:25:17.068645+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:25:18.124 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: audit 2026-03-08T23:25:17.133216+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:25:18.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:17 vm04 bash[19918]: audit 2026-03-08T23:25:17.133216+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: cluster 2026-03-08T23:25:16.249096+0000 mgr.x (mgr.14150) 400 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: cluster 2026-03-08T23:25:16.249096+0000 mgr.x (mgr.14150) 400 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: audit 2026-03-08T23:25:17.068129+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: audit 2026-03-08T23:25:17.068129+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: audit 2026-03-08T23:25:17.068645+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: audit 2026-03-08T23:25:17.068645+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: audit 2026-03-08T23:25:17.133216+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:25:18.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:17 vm02 bash[17457]: audit 2026-03-08T23:25:17.133216+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: cluster 2026-03-08T23:25:16.249096+0000 mgr.x (mgr.14150) 400 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: cluster 2026-03-08T23:25:16.249096+0000 mgr.x (mgr.14150) 400 : cluster [DBG] pgmap v326: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: audit 2026-03-08T23:25:17.068129+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 
bash[20034]: audit 2026-03-08T23:25:17.068129+0000 mon.a (mon.0) 750 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: audit 2026-03-08T23:25:17.068645+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: audit 2026-03-08T23:25:17.068645+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: audit 2026-03-08T23:25:17.133216+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:25:18.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:17 vm10 bash[20034]: audit 2026-03-08T23:25:17.133216+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' 2026-03-08T23:25:20.093 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:19 vm02 bash[17457]: cluster 2026-03-08T23:25:18.249338+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:20.093 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:19 vm02 bash[17457]: cluster 2026-03-08T23:25:18.249338+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:20.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:19 vm04 bash[19918]: cluster 2026-03-08T23:25:18.249338+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:20.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:19 vm04 bash[19918]: cluster 2026-03-08T23:25:18.249338+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:20.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:19 vm10 bash[20034]: cluster 2026-03-08T23:25:18.249338+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:20.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:19 vm10 bash[20034]: cluster 2026-03-08T23:25:18.249338+0000 mgr.x (mgr.14150) 401 : cluster [DBG] pgmap v327: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:20.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:25:20 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:25:21.024 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:20 vm10 bash[20034]: audit 2026-03-08T23:25:20.093742+0000 mgr.x (mgr.14150) 402 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:21.024 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:20 vm10 bash[20034]: audit 2026-03-08T23:25:20.093742+0000 mgr.x (mgr.14150) 402 : audit 
[DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:21.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:20 vm04 bash[19918]: audit 2026-03-08T23:25:20.093742+0000 mgr.x (mgr.14150) 402 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:21.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:20 vm04 bash[19918]: audit 2026-03-08T23:25:20.093742+0000 mgr.x (mgr.14150) 402 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:21.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:20 vm02 bash[17457]: audit 2026-03-08T23:25:20.093742+0000 mgr.x (mgr.14150) 402 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:21.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:20 vm02 bash[17457]: audit 2026-03-08T23:25:20.093742+0000 mgr.x (mgr.14150) 402 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:21.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:25:21 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:25:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:22 vm04 bash[19918]: cluster 2026-03-08T23:25:20.249612+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:22 vm04 bash[19918]: cluster 2026-03-08T23:25:20.249612+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:22 vm04 bash[19918]: audit 2026-03-08T23:25:21.024659+0000 mgr.x (mgr.14150) 404 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:22.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:22 vm04 bash[19918]: audit 2026-03-08T23:25:21.024659+0000 mgr.x (mgr.14150) 404 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:22.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:22 vm02 bash[17457]: cluster 2026-03-08T23:25:20.249612+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:22.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:22 vm02 bash[17457]: cluster 2026-03-08T23:25:20.249612+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:22.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:22 vm02 bash[17457]: audit 2026-03-08T23:25:21.024659+0000 mgr.x (mgr.14150) 404 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:22.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:22 vm02 bash[17457]: audit 2026-03-08T23:25:21.024659+0000 mgr.x 
(mgr.14150) 404 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:22.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:22 vm10 bash[20034]: cluster 2026-03-08T23:25:20.249612+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:22.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:22 vm10 bash[20034]: cluster 2026-03-08T23:25:20.249612+0000 mgr.x (mgr.14150) 403 : cluster [DBG] pgmap v328: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:22.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:22 vm10 bash[20034]: audit 2026-03-08T23:25:21.024659+0000 mgr.x (mgr.14150) 404 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:22.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:22 vm10 bash[20034]: audit 2026-03-08T23:25:21.024659+0000 mgr.x (mgr.14150) 404 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:23.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:23 vm04 bash[19918]: cluster 2026-03-08T23:25:22.249856+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:23.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:23 vm04 bash[19918]: cluster 2026-03-08T23:25:22.249856+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:23.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:23 vm02 bash[17457]: cluster 2026-03-08T23:25:22.249856+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:23.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:23 vm02 bash[17457]: cluster 2026-03-08T23:25:22.249856+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:23.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:23 vm10 bash[20034]: cluster 2026-03-08T23:25:22.249856+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:23.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:23 vm10 bash[20034]: cluster 2026-03-08T23:25:22.249856+0000 mgr.x (mgr.14150) 405 : cluster [DBG] pgmap v329: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:25.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:25 vm04 bash[19918]: cluster 2026-03-08T23:25:24.250091+0000 mgr.x (mgr.14150) 406 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:25.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:25 vm04 bash[19918]: cluster 2026-03-08T23:25:24.250091+0000 mgr.x (mgr.14150) 406 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 
op/s 2026-03-08T23:25:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:25 vm02 bash[17457]: cluster 2026-03-08T23:25:24.250091+0000 mgr.x (mgr.14150) 406 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:25.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:25 vm02 bash[17457]: cluster 2026-03-08T23:25:24.250091+0000 mgr.x (mgr.14150) 406 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:25.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:25 vm10 bash[20034]: cluster 2026-03-08T23:25:24.250091+0000 mgr.x (mgr.14150) 406 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:25.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:25 vm10 bash[20034]: cluster 2026-03-08T23:25:24.250091+0000 mgr.x (mgr.14150) 406 : cluster [DBG] pgmap v330: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:27.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:27 vm04 bash[19918]: cluster 2026-03-08T23:25:26.250367+0000 mgr.x (mgr.14150) 407 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:27.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:27 vm04 bash[19918]: cluster 2026-03-08T23:25:26.250367+0000 mgr.x (mgr.14150) 407 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:27 vm02 bash[17457]: cluster 2026-03-08T23:25:26.250367+0000 mgr.x (mgr.14150) 407 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:27.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:27 vm02 bash[17457]: cluster 2026-03-08T23:25:26.250367+0000 mgr.x (mgr.14150) 407 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:27 vm10 bash[20034]: cluster 2026-03-08T23:25:26.250367+0000 mgr.x (mgr.14150) 407 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:27.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:27 vm10 bash[20034]: cluster 2026-03-08T23:25:26.250367+0000 mgr.x (mgr.14150) 407 : cluster [DBG] pgmap v331: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:29 vm04 bash[19918]: cluster 2026-03-08T23:25:28.250610+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:29.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:29 vm04 bash[19918]: cluster 2026-03-08T23:25:28.250610+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:29 vm02 bash[17457]: cluster 2026-03-08T23:25:28.250610+0000 
mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:29.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:29 vm02 bash[17457]: cluster 2026-03-08T23:25:28.250610+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:29.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:29 vm10 bash[20034]: cluster 2026-03-08T23:25:28.250610+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:29.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:29 vm10 bash[20034]: cluster 2026-03-08T23:25:28.250610+0000 mgr.x (mgr.14150) 408 : cluster [DBG] pgmap v332: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:30.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:25:30 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:25:30.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:30 vm04 bash[19918]: audit 2026-03-08T23:25:30.103260+0000 mgr.x (mgr.14150) 409 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:30.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:30 vm04 bash[19918]: audit 2026-03-08T23:25:30.103260+0000 mgr.x (mgr.14150) 409 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:30 vm02 bash[17457]: audit 2026-03-08T23:25:30.103260+0000 mgr.x (mgr.14150) 409 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:30.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:30 vm02 bash[17457]: audit 2026-03-08T23:25:30.103260+0000 mgr.x (mgr.14150) 409 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:30.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:30 vm10 bash[20034]: audit 2026-03-08T23:25:30.103260+0000 mgr.x (mgr.14150) 409 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:30.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:30 vm10 bash[20034]: audit 2026-03-08T23:25:30.103260+0000 mgr.x (mgr.14150) 409 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:31.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:25:31 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:25:31.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:31 vm04 bash[19918]: cluster 2026-03-08T23:25:30.250900+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:31.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:31 vm04 bash[19918]: cluster 2026-03-08T23:25:30.250900+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:31.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:31 vm04 bash[19918]: audit 2026-03-08T23:25:31.035336+0000 mgr.x (mgr.14150) 411 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:31.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:31 vm04 bash[19918]: audit 2026-03-08T23:25:31.035336+0000 mgr.x (mgr.14150) 411 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:31 vm02 bash[17457]: cluster 2026-03-08T23:25:30.250900+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:31 vm02 bash[17457]: cluster 2026-03-08T23:25:30.250900+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:31 vm02 bash[17457]: audit 2026-03-08T23:25:31.035336+0000 mgr.x (mgr.14150) 411 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:31.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:31 vm02 bash[17457]: audit 2026-03-08T23:25:31.035336+0000 mgr.x (mgr.14150) 411 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:31 vm10 bash[20034]: cluster 2026-03-08T23:25:30.250900+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:31 vm10 bash[20034]: cluster 2026-03-08T23:25:30.250900+0000 mgr.x (mgr.14150) 410 : cluster [DBG] pgmap v333: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:31 vm10 bash[20034]: audit 2026-03-08T23:25:31.035336+0000 mgr.x (mgr.14150) 411 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:31.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:31 vm10 bash[20034]: audit 2026-03-08T23:25:31.035336+0000 mgr.x (mgr.14150) 411 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:25:33.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:33 vm04 bash[19918]: cluster 2026-03-08T23:25:32.251148+0000 mgr.x (mgr.14150) 412 : cluster [DBG] pgmap v334: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:33.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:33 vm04 bash[19918]: cluster 2026-03-08T23:25:32.251148+0000 mgr.x (mgr.14150) 412 : cluster [DBG] pgmap v334: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:25:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:33 vm02 bash[17457]: cluster 
2026-03-08T23:25:33.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:33 vm10 bash[20034]: cluster 2026-03-08T23:25:32.251148+0000 mgr.x (mgr.14150) 412 : cluster [DBG] pgmap v334: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:state without impacting any branches by switching back to a branch.
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:  git switch -c <new-branch-name>
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.830 INFO:teuthology.orchestra.run.vm04.stderr:Or undo this operation with:
2026-03-08T23:25:35.831 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.831 INFO:teuthology.orchestra.run.vm04.stderr:  git switch -
2026-03-08T23:25:35.831 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.831 INFO:teuthology.orchestra.run.vm04.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-08T23:25:35.831 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:25:35.831 INFO:teuthology.orchestra.run.vm04.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
2026-03-08T23:25:35.836 DEBUG:teuthology.orchestra.run.vm04:> cp -- /home/ubuntu/cephtest/clone.client.1/src/test/cli-integration/rbd/iscsi_client.t /home/ubuntu/cephtest/archive/cram.client.1
2026-03-08T23:25:35.844 DEBUG:teuthology.orchestra.run.vm10:> mkdir -- /home/ubuntu/cephtest/archive/cram.client.2 && python3 -m venv /home/ubuntu/cephtest/virtualenv && /home/ubuntu/cephtest/virtualenv/bin/pip install cram==0.6
2026-03-08T23:25:35.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:35 vm04 bash[19918]: cluster 2026-03-08T23:25:34.251407+0000 mgr.x (mgr.14150) 413 : cluster [DBG] pgmap v335: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:35.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:35 vm02 bash[17457]: cluster 2026-03-08T23:25:34.251407+0000 mgr.x (mgr.14150) 413 : cluster [DBG] pgmap v335: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:35.906 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:35 vm10 bash[20034]: cluster 2026-03-08T23:25:34.251407+0000 mgr.x (mgr.14150) 413 : cluster [DBG] pgmap v335: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:37.545 INFO:teuthology.orchestra.run.vm10.stdout:Collecting cram==0.6
2026-03-08T23:25:37.589 INFO:teuthology.orchestra.run.vm10.stdout: Downloading cram-0.6-py2.py3-none-any.whl (17 kB)
2026-03-08T23:25:37.603 INFO:teuthology.orchestra.run.vm10.stdout:Installing collected packages: cram
2026-03-08T23:25:37.609 INFO:teuthology.orchestra.run.vm10.stdout:Successfully installed cram-0.6
2026-03-08T23:25:37.641 DEBUG:teuthology.orchestra.run.vm10:> rm -rf /home/ubuntu/cephtest/clone.client.2 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.2 && cd /home/ubuntu/cephtest/clone.client.2 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-08T23:25:37.644 INFO:teuthology.orchestra.run.vm10.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.2'...
2026-03-08T23:25:37.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:37 vm04 bash[19918]: cluster 2026-03-08T23:25:36.251681+0000 mgr.x (mgr.14150) 414 : cluster [DBG] pgmap v336: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:37 vm02 bash[17457]: cluster 2026-03-08T23:25:36.251681+0000 mgr.x (mgr.14150) 414 : cluster [DBG] pgmap v336: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:37.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:37 vm10 bash[20034]: cluster 2026-03-08T23:25:36.251681+0000 mgr.x (mgr.14150) 414 : cluster [DBG] pgmap v336: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:39.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:39 vm04 bash[19918]: cluster 2026-03-08T23:25:38.251893+0000 mgr.x (mgr.14150) 415 : cluster [DBG] pgmap v337: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:39.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:39 vm02 bash[17457]: cluster 2026-03-08T23:25:38.251893+0000 mgr.x (mgr.14150) 415 : cluster [DBG] pgmap v337: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:39.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:39 vm10 bash[20034]: cluster 2026-03-08T23:25:38.251893+0000 mgr.x (mgr.14150) 415 : cluster [DBG] pgmap v337: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:40.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:25:40 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:25:40.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:40 vm04 bash[19918]: audit 2026-03-08T23:25:40.106492+0000 mgr.x (mgr.14150) 416 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:40 vm02 bash[17457]: audit 2026-03-08T23:25:40.106492+0000 mgr.x (mgr.14150) 416 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:40.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:40 vm10 bash[20034]: audit 2026-03-08T23:25:40.106492+0000 mgr.x (mgr.14150) 416 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:41.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:25:41 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:25:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:41 vm04 bash[19918]: cluster 2026-03-08T23:25:40.252123+0000 mgr.x (mgr.14150) 417 : cluster [DBG] pgmap v338: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:41.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:41 vm04 bash[19918]: audit 2026-03-08T23:25:41.042598+0000 mgr.x (mgr.14150) 418 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:41 vm02 bash[17457]: cluster 2026-03-08T23:25:40.252123+0000 mgr.x (mgr.14150) 417 : cluster [DBG] pgmap v338: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:41.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:41 vm02 bash[17457]: audit 2026-03-08T23:25:41.042598+0000 mgr.x (mgr.14150) 418 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:41 vm10 bash[20034]: cluster 2026-03-08T23:25:40.252123+0000 mgr.x (mgr.14150) 417 : cluster [DBG] pgmap v338: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:41.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:41 vm10 bash[20034]: audit 2026-03-08T23:25:41.042598+0000 mgr.x (mgr.14150) 418 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:43.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:43 vm04 bash[19918]: cluster 2026-03-08T23:25:42.252359+0000 mgr.x (mgr.14150) 419 : cluster [DBG] pgmap v339: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:43.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:43 vm02 bash[17457]: cluster 2026-03-08T23:25:42.252359+0000 mgr.x (mgr.14150) 419 : cluster [DBG] pgmap v339: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:43.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:43 vm10 bash[20034]: cluster 2026-03-08T23:25:42.252359+0000 mgr.x (mgr.14150) 419 : cluster [DBG] pgmap v339: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:45.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:45 vm04 bash[19918]: cluster 2026-03-08T23:25:44.252582+0000 mgr.x (mgr.14150) 420 : cluster [DBG] pgmap v340: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:45.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:45 vm02 bash[17457]: cluster 2026-03-08T23:25:44.252582+0000 mgr.x (mgr.14150) 420 : cluster [DBG] pgmap v340: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:45.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:45 vm10 bash[20034]: cluster 2026-03-08T23:25:44.252582+0000 mgr.x (mgr.14150) 420 : cluster [DBG] pgmap v340: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:47.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:47 vm04 bash[19918]: cluster 2026-03-08T23:25:46.252855+0000 mgr.x (mgr.14150) 421 : cluster [DBG] pgmap v341: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:47.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:47 vm02 bash[17457]: cluster 2026-03-08T23:25:46.252855+0000 mgr.x (mgr.14150) 421 : cluster [DBG] pgmap v341: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:47.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:47 vm10 bash[20034]: cluster 2026-03-08T23:25:46.252855+0000 mgr.x (mgr.14150) 421 : cluster [DBG] pgmap v341: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:50.110 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:49 vm02 bash[17457]: cluster 2026-03-08T23:25:48.253095+0000 mgr.x (mgr.14150) 422 : cluster [DBG] pgmap v342: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:49 vm04 bash[19918]: cluster 2026-03-08T23:25:48.253095+0000 mgr.x (mgr.14150) 422 : cluster [DBG] pgmap v342: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:50.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:49 vm10 bash[20034]: cluster 2026-03-08T23:25:48.253095+0000 mgr.x (mgr.14150) 422 : cluster [DBG] pgmap v342: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:50.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:25:50 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:25:51.053 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:50 vm10 bash[20034]: audit 2026-03-08T23:25:50.110852+0000 mgr.x (mgr.14150) 423 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:51.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:50 vm04 bash[19918]: audit 2026-03-08T23:25:50.110852+0000 mgr.x (mgr.14150) 423 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:51.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:50 vm02 bash[17457]: audit 2026-03-08T23:25:50.110852+0000 mgr.x (mgr.14150) 423 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:51.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:25:51 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:25:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:51 vm04 bash[19918]: cluster 2026-03-08T23:25:50.253333+0000 mgr.x (mgr.14150) 424 : cluster [DBG] pgmap v343: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:51 vm04 bash[19918]: audit 2026-03-08T23:25:51.053183+0000 mgr.x (mgr.14150) 425 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:52.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:51 vm02 bash[17457]: cluster 2026-03-08T23:25:50.253333+0000 mgr.x (mgr.14150) 424 : cluster [DBG] pgmap v343: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:52.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:51 vm02 bash[17457]: audit 2026-03-08T23:25:51.053183+0000 mgr.x (mgr.14150) 425 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:52.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:51 vm10 bash[20034]: cluster 2026-03-08T23:25:50.253333+0000 mgr.x (mgr.14150) 424 : cluster [DBG] pgmap v343: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:25:52.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:51 vm10 bash[20034]: audit 2026-03-08T23:25:51.053183+0000 mgr.x (mgr.14150) 425 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:25:54.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:53 vm04 bash[19918]: cluster 2026-03-08T23:25:52.253560+0000 mgr.x (mgr.14150) 426 : cluster [DBG] pgmap v344: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:53 vm02 bash[17457]: cluster 2026-03-08T23:25:52.253560+0000 mgr.x (mgr.14150) 426 : cluster [DBG] pgmap v344: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:54.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:53 vm10 bash[20034]: cluster 2026-03-08T23:25:52.253560+0000 mgr.x (mgr.14150) 426 : cluster [DBG] pgmap v344: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:56.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:55 vm04 bash[19918]: cluster 2026-03-08T23:25:54.253796+0000 mgr.x (mgr.14150) 427 : cluster [DBG] pgmap v345: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:56.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:55 vm02 bash[17457]: cluster 2026-03-08T23:25:54.253796+0000 mgr.x (mgr.14150) 427 : cluster [DBG] pgmap v345: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:55 vm10 bash[20034]: cluster 2026-03-08T23:25:54.253796+0000 mgr.x (mgr.14150) 427 : cluster [DBG] pgmap v345: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:25:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:57 vm04 
bash[19918]: cluster 2026-03-08T23:25:56.254049+0000 mgr.x (mgr.14150) 428 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:57 vm04 bash[19918]: cluster 2026-03-08T23:25:56.254049+0000 mgr.x (mgr.14150) 428 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:57 vm02 bash[17457]: cluster 2026-03-08T23:25:56.254049+0000 mgr.x (mgr.14150) 428 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:58.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:57 vm02 bash[17457]: cluster 2026-03-08T23:25:56.254049+0000 mgr.x (mgr.14150) 428 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:58.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:57 vm10 bash[20034]: cluster 2026-03-08T23:25:56.254049+0000 mgr.x (mgr.14150) 428 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:25:58.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:57 vm10 bash[20034]: cluster 2026-03-08T23:25:56.254049+0000 mgr.x (mgr.14150) 428 : cluster [DBG] pgmap v346: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:00.114 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:59 vm02 bash[17457]: cluster 2026-03-08T23:25:58.254317+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:00.114 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:25:59 vm02 bash[17457]: cluster 2026-03-08T23:25:58.254317+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:00.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:59 vm04 bash[19918]: cluster 2026-03-08T23:25:58.254317+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:00.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:25:59 vm04 bash[19918]: cluster 2026-03-08T23:25:58.254317+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:59 vm10 bash[20034]: cluster 2026-03-08T23:25:58.254317+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:00.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:25:59 vm10 bash[20034]: cluster 2026-03-08T23:25:58.254317+0000 mgr.x (mgr.14150) 429 : cluster [DBG] pgmap v347: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:00.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:00 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:26:01.063 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:00 
vm10 bash[20034]: audit 2026-03-08T23:26:00.114250+0000 mgr.x (mgr.14150) 430 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:01.064 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:00 vm10 bash[20034]: audit 2026-03-08T23:26:00.114250+0000 mgr.x (mgr.14150) 430 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:01.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:00 vm04 bash[19918]: audit 2026-03-08T23:26:00.114250+0000 mgr.x (mgr.14150) 430 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:01.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:00 vm04 bash[19918]: audit 2026-03-08T23:26:00.114250+0000 mgr.x (mgr.14150) 430 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:00 vm02 bash[17457]: audit 2026-03-08T23:26:00.114250+0000 mgr.x (mgr.14150) 430 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:01.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:00 vm02 bash[17457]: audit 2026-03-08T23:26:00.114250+0000 mgr.x (mgr.14150) 430 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:01.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:01 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:26:02.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:01 vm04 bash[19918]: cluster 2026-03-08T23:26:00.254596+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:02.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:01 vm04 bash[19918]: cluster 2026-03-08T23:26:00.254596+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:02.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:01 vm04 bash[19918]: audit 2026-03-08T23:26:01.063739+0000 mgr.x (mgr.14150) 432 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:02.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:01 vm04 bash[19918]: audit 2026-03-08T23:26:01.063739+0000 mgr.x (mgr.14150) 432 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:01 vm02 bash[17457]: cluster 2026-03-08T23:26:00.254596+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:01 vm02 bash[17457]: cluster 2026-03-08T23:26:00.254596+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:02.144 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:01 vm02 bash[17457]: audit 2026-03-08T23:26:01.063739+0000 mgr.x (mgr.14150) 432 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:02.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:01 vm02 bash[17457]: audit 2026-03-08T23:26:01.063739+0000 mgr.x (mgr.14150) 432 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:01 vm10 bash[20034]: cluster 2026-03-08T23:26:00.254596+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:01 vm10 bash[20034]: cluster 2026-03-08T23:26:00.254596+0000 mgr.x (mgr.14150) 431 : cluster [DBG] pgmap v348: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:01 vm10 bash[20034]: audit 2026-03-08T23:26:01.063739+0000 mgr.x (mgr.14150) 432 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:02.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:01 vm10 bash[20034]: audit 2026-03-08T23:26:01.063739+0000 mgr.x (mgr.14150) 432 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:04.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:03 vm04 bash[19918]: cluster 2026-03-08T23:26:02.254868+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:04.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:03 vm04 bash[19918]: cluster 2026-03-08T23:26:02.254868+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:03 vm02 bash[17457]: cluster 2026-03-08T23:26:02.254868+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:03 vm02 bash[17457]: cluster 2026-03-08T23:26:02.254868+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:04.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:03 vm10 bash[20034]: cluster 2026-03-08T23:26:02.254868+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:04.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:03 vm10 bash[20034]: cluster 2026-03-08T23:26:02.254868+0000 mgr.x (mgr.14150) 433 : cluster [DBG] pgmap v349: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:05 vm02 bash[17457]: cluster 2026-03-08T23:26:04.255121+0000 mgr.x (mgr.14150) 434 : 
cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:06.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:05 vm02 bash[17457]: cluster 2026-03-08T23:26:04.255121+0000 mgr.x (mgr.14150) 434 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:05 vm10 bash[20034]: cluster 2026-03-08T23:26:04.255121+0000 mgr.x (mgr.14150) 434 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:06.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:05 vm10 bash[20034]: cluster 2026-03-08T23:26:04.255121+0000 mgr.x (mgr.14150) 434 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:05 vm04 bash[19918]: cluster 2026-03-08T23:26:04.255121+0000 mgr.x (mgr.14150) 434 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:06.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:05 vm04 bash[19918]: cluster 2026-03-08T23:26:04.255121+0000 mgr.x (mgr.14150) 434 : cluster [DBG] pgmap v350: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:08.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:07 vm02 bash[17457]: cluster 2026-03-08T23:26:06.255413+0000 mgr.x (mgr.14150) 435 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:08.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:07 vm02 bash[17457]: cluster 2026-03-08T23:26:06.255413+0000 mgr.x (mgr.14150) 435 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:08.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:07 vm10 bash[20034]: cluster 2026-03-08T23:26:06.255413+0000 mgr.x (mgr.14150) 435 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:08.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:07 vm10 bash[20034]: cluster 2026-03-08T23:26:06.255413+0000 mgr.x (mgr.14150) 435 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:08.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:07 vm04 bash[19918]: cluster 2026-03-08T23:26:06.255413+0000 mgr.x (mgr.14150) 435 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:08.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:07 vm04 bash[19918]: cluster 2026-03-08T23:26:06.255413+0000 mgr.x (mgr.14150) 435 : cluster [DBG] pgmap v351: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:10.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:10 vm02 bash[37757]: debug there is no tcmu-runner data available 2026-03-08T23:26:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:10 vm02 bash[17457]: cluster 2026-03-08T23:26:08.255662+0000 mgr.x (mgr.14150) 
436 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:10.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:10 vm02 bash[17457]: cluster 2026-03-08T23:26:08.255662+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:10.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:10 vm04 bash[19918]: cluster 2026-03-08T23:26:08.255662+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:10.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:10 vm04 bash[19918]: cluster 2026-03-08T23:26:08.255662+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:10.656 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:10 vm10 bash[20034]: cluster 2026-03-08T23:26:08.255662+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:10.656 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:10 vm10 bash[20034]: cluster 2026-03-08T23:26:08.255662+0000 mgr.x (mgr.14150) 436 : cluster [DBG] pgmap v352: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:11 vm10 bash[20034]: audit 2026-03-08T23:26:10.123772+0000 mgr.x (mgr.14150) 437 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:11 vm10 bash[20034]: audit 2026-03-08T23:26:10.123772+0000 mgr.x (mgr.14150) 437 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:11 vm10 bash[20034]: cluster 2026-03-08T23:26:10.255894+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:11 vm10 bash[20034]: cluster 2026-03-08T23:26:10.255894+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:11 vm10 bash[20034]: audit 2026-03-08T23:26:11.070335+0000 mgr.x (mgr.14150) 439 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:11.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:11 vm10 bash[20034]: audit 2026-03-08T23:26:11.070335+0000 mgr.x (mgr.14150) 439 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:11.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:11 vm10 bash[39354]: debug there is no tcmu-runner data available 2026-03-08T23:26:11.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:11 vm04 bash[19918]: audit 2026-03-08T23:26:10.123772+0000 mgr.x 
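The recurring "service status" dispatches above are the two iSCSI gateway daemons (client.iscsi.iscsi.a on vm02, client.iscsi.iscsi.b on vm10) checking in with mgr.x roughly every ten seconds, while the pgmap lines are the mgr's two-second cluster digest echoed by all three mons. A minimal sketch of inspecting the same state by hand, assuming an admin keyring on a cluster host (output shapes elided):

    # "service status" is the same mgr command the gateways' polling shows up
    # as in the audit entries above; "service dump" prints the whole service map.
    ceph service status --format json
    ceph service dump
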
2026-03-08T23:26:11.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:11 vm04 bash[19918]: cluster 2026-03-08T23:26:10.255894+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:11.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:11 vm04 bash[19918]: audit 2026-03-08T23:26:11.070335+0000 mgr.x (mgr.14150) 439 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:11 vm02 bash[17457]: audit 2026-03-08T23:26:10.123772+0000 mgr.x (mgr.14150) 437 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:11 vm02 bash[17457]: cluster 2026-03-08T23:26:10.255894+0000 mgr.x (mgr.14150) 438 : cluster [DBG] pgmap v353: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:11.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:11 vm02 bash[17457]: audit 2026-03-08T23:26:11.070335+0000 mgr.x (mgr.14150) 439 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:13.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:13 vm04 bash[19918]: cluster 2026-03-08T23:26:12.256100+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:13.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:13 vm02 bash[17457]: cluster 2026-03-08T23:26:12.256100+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:13.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:13 vm10 bash[20034]: cluster 2026-03-08T23:26:12.256100+0000 mgr.x (mgr.14150) 440 : cluster [DBG] pgmap v354: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:15.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:15 vm04 bash[19918]: cluster 2026-03-08T23:26:14.256299+0000 mgr.x (mgr.14150) 441 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:15.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:15 vm02 bash[17457]: cluster 2026-03-08T23:26:14.256299+0000 mgr.x (mgr.14150) 441 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:15.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:15 vm10 bash[20034]: cluster 2026-03-08T23:26:14.256299+0000 mgr.x (mgr.14150) 441 : cluster [DBG] pgmap v355: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:17 vm04 bash[19918]: cluster 2026-03-08T23:26:16.256558+0000 mgr.x (mgr.14150) 442 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:17.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:17 vm04 bash[19918]: audit 2026-03-08T23:26:17.158262+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:26:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:17 vm02 bash[17457]: cluster 2026-03-08T23:26:16.256558+0000 mgr.x (mgr.14150) 442 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:17.644 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:17 vm02 bash[17457]: audit 2026-03-08T23:26:17.158262+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:26:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:17 vm10 bash[20034]: cluster 2026-03-08T23:26:16.256558+0000 mgr.x (mgr.14150) 442 : cluster [DBG] pgmap v356: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:17.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:17 vm10 bash[20034]: audit 2026-03-08T23:26:17.158262+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:26:18.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:18 vm04 bash[19918]: audit 2026-03-08T23:26:17.496067+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:18 vm04 bash[19918]: audit 2026-03-08T23:26:17.502138+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:18 vm04 bash[19918]: audit 2026-03-08T23:26:17.811858+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:26:18.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:18 vm04 bash[19918]: audit 2026-03-08T23:26:17.812394+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:26:18.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:18 vm04 bash[19918]: audit 2026-03-08T23:26:17.817279+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:18 vm02 bash[17457]: audit 2026-03-08T23:26:17.496067+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:18 vm02 bash[17457]: audit 2026-03-08T23:26:17.502138+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:18 vm02 bash[17457]: audit 2026-03-08T23:26:17.811858+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:26:18.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:18 vm02 bash[17457]: audit 2026-03-08T23:26:17.812394+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:26:18.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:18 vm02 bash[17457]: audit 2026-03-08T23:26:17.817279+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:18 vm10 bash[20034]: audit 2026-03-08T23:26:17.496067+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:18 vm10 bash[20034]: audit 2026-03-08T23:26:17.502138+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
2026-03-08T23:26:18.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:18 vm10 bash[20034]: audit 2026-03-08T23:26:17.811858+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:26:18.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:18 vm10 bash[20034]: audit 2026-03-08T23:26:17.812394+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:26:18.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:18 vm10 bash[20034]: audit 2026-03-08T23:26:17.817279+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14150 192.168.123.102:0/1721940610' entity='mgr.x'
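Audit entries 753-758 above show mgr.x itself issuing config dump, config generate-minimal-conf, and auth get client.admin in quick succession, a pattern consistent with the cephadm mgr module refreshing the minimal ceph.conf and admin keyring it maintains on managed hosts (an inference from the command sequence; the log does not say so explicitly). The same commands can be run by hand from any node with admin credentials:

    # Manual equivalents of the mon commands dispatched in audit 753-758.
    ceph config dump --format json      # full config database (audit 753)
    ceph config generate-minimal-conf   # minimal ceph.conf text (audit 756)
    ceph auth get client.admin          # admin keyring (audit 757)
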
2026-03-08T23:26:20.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:20 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:26:20.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:20 vm04 bash[19918]: cluster 2026-03-08T23:26:18.256812+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:20.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:20 vm02 bash[17457]: cluster 2026-03-08T23:26:18.256812+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:20.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:20 vm10 bash[20034]: cluster 2026-03-08T23:26:18.256812+0000 mgr.x (mgr.14150) 443 : cluster [DBG] pgmap v357: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:21.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:21 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:26:21.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:21 vm04 bash[19918]: audit 2026-03-08T23:26:20.125083+0000 mgr.x (mgr.14150) 444 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:21.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:21 vm04 bash[19918]: cluster 2026-03-08T23:26:20.257099+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:21.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:21 vm04 bash[19918]: audit 2026-03-08T23:26:21.078447+0000 mgr.x (mgr.14150) 446 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:21 vm02 bash[17457]: audit 2026-03-08T23:26:20.125083+0000 mgr.x (mgr.14150) 444 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:21 vm02 bash[17457]: cluster 2026-03-08T23:26:20.257099+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:21.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:21 vm02 bash[17457]: audit 2026-03-08T23:26:21.078447+0000 mgr.x (mgr.14150) 446 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:21 vm10 bash[20034]: audit 2026-03-08T23:26:20.125083+0000 mgr.x (mgr.14150) 444 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:21 vm10 bash[20034]: cluster 2026-03-08T23:26:20.257099+0000 mgr.x (mgr.14150) 445 : cluster [DBG] pgmap v358: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:21.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:21 vm10 bash[20034]: audit 2026-03-08T23:26:21.078447+0000 mgr.x (mgr.14150) 446 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:23.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:23 vm04 bash[19918]: cluster 2026-03-08T23:26:22.257412+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:23.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:23 vm02 bash[17457]: cluster 2026-03-08T23:26:22.257412+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:23.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:23 vm10 bash[20034]: cluster 2026-03-08T23:26:22.257412+0000 mgr.x (mgr.14150) 447 : cluster [DBG] pgmap v359: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:25.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:25 vm04 bash[19918]: cluster 2026-03-08T23:26:24.257689+0000 mgr.x (mgr.14150) 448 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:25.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:25 vm02 bash[17457]: cluster 2026-03-08T23:26:24.257689+0000 mgr.x (mgr.14150) 448 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:25.906 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:25 vm10 bash[20034]: cluster 2026-03-08T23:26:24.257689+0000 mgr.x (mgr.14150) 448 : cluster [DBG] pgmap v360: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:27.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:27 vm04 bash[19918]: cluster 2026-03-08T23:26:26.257994+0000 mgr.x (mgr.14150) 449 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:27.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:27 vm02 bash[17457]: cluster 2026-03-08T23:26:26.257994+0000 mgr.x (mgr.14150) 449 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:27.906 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:27 vm10 bash[20034]: cluster 2026-03-08T23:26:26.257994+0000 mgr.x (mgr.14150) 449 : cluster [DBG] pgmap v361: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-08T23:26:29.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:29 vm04 bash[19918]: cluster 2026-03-08T23:26:28.258299+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:29.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:29 vm02 bash[17457]: cluster 2026-03-08T23:26:28.258299+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:29.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:29 vm10 bash[20034]: cluster 2026-03-08T23:26:28.258299+0000 mgr.x (mgr.14150) 450 : cluster [DBG] pgmap v362: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:30.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:30 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:26:30.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:30 vm10 bash[20034]: audit 2026-03-08T23:26:30.125808+0000 mgr.x (mgr.14150) 451 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:31.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:30 vm04 bash[19918]: audit 2026-03-08T23:26:30.125808+0000 mgr.x (mgr.14150) 451 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:31.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:30 vm02 bash[17457]: audit 2026-03-08T23:26:30.125808+0000 mgr.x (mgr.14150) 451 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:state without impacting any branches by switching back to a branch.
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr: git switch -c <new-branch-name>
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:Or undo this operation with:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr: git switch -
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:26:31.189 INFO:teuthology.orchestra.run.vm10.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
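The detached-HEAD notice above is expected here: teuthology checks the qa clone out at the pinned suite sha1 (569c3e99c9b3...), not at a branch. For reference, the two follow-ups git's advice describes, with a placeholder branch name:

    # Only needed if you wanted to keep work made on the detached HEAD:
    git switch -c keep-my-work   # create a branch at the current commit
    git switch -                 # or: return to the previously checked-out branch
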
2026-03-08T23:26:31.195 DEBUG:teuthology.orchestra.run.vm10:> cp -- /home/ubuntu/cephtest/clone.client.2/src/test/cli-integration/rbd/gwcli_delete.t /home/ubuntu/cephtest/archive/cram.client.2
2026-03-08T23:26:31.242 INFO:tasks.cram:Running tests for client.0...
2026-03-08T23:26:31.242 DEBUG:teuthology.orchestra.run.vm02:> CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t
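cram, invoked above against /home/ubuntu/cephtest/archive/cram.client.0/*.t, treats each .t file as a scripted shell session: two-space-indented "$" lines are executed and the indented lines after them are the expected output, with any diff failing the test. A tiny self-contained illustration (example.t is hypothetical, not one of the rbd tests in this job):

    # Sketch of the cram .t format used by the gwcli/iscsi_client tests above.
    cat > example.t <<'EOF'
      $ echo hello
      hello
    EOF
    cram -v -- example.t
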
192.168.123.102:0/664190324' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:32.107 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:31 vm02 bash[17457]: audit 2026-03-08T23:26:31.637058+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/664190324' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:32.107 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:31 vm02 bash[17457]: audit 2026-03-08T23:26:31.644728+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.102:0/2017341893' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:32.107 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:31 vm02 bash[17457]: audit 2026-03-08T23:26:31.644728+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.102:0/2017341893' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:32.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:31 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:31] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:32.108 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:31 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:31] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: cluster 2026-03-08T23:26:30.258555+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: cluster 2026-03-08T23:26:30.258555+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.086478+0000 mgr.x (mgr.14150) 453 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.086478+0000 mgr.x (mgr.14150) 453 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.618269+0000 mon.a (mon.0) 759 : audit [DBG] from='client.? 192.168.123.102:0/380539169' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.618269+0000 mon.a (mon.0) 759 : audit [DBG] from='client.? 192.168.123.102:0/380539169' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.637058+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/664190324' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.637058+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 
192.168.123.102:0/664190324' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.644728+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.102:0/2017341893' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:32.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:31 vm04 bash[19918]: audit 2026-03-08T23:26:31.644728+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.102:0/2017341893' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: cluster 2026-03-08T23:26:30.258555+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: cluster 2026-03-08T23:26:30.258555+0000 mgr.x (mgr.14150) 452 : cluster [DBG] pgmap v363: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.086478+0000 mgr.x (mgr.14150) 453 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.086478+0000 mgr.x (mgr.14150) 453 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.618269+0000 mon.a (mon.0) 759 : audit [DBG] from='client.? 192.168.123.102:0/380539169' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.618269+0000 mon.a (mon.0) 759 : audit [DBG] from='client.? 192.168.123.102:0/380539169' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.637058+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/664190324' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.637058+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.102:0/664190324' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.644728+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 192.168.123.102:0/2017341893' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:32.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:31 vm10 bash[20034]: audit 2026-03-08T23:26:31.644728+0000 mon.a (mon.0) 760 : audit [DBG] from='client.? 
192.168.123.102:0/2017341893' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "GET /api/_ping HTTP/1.1" 200 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "GET /api/_ping HTTP/1.1" 200 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug (LUN.allocate) created datapool/block0 successfully 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug (LUN.add_dev_to_lio) Adding image 'datapool/block0' to LIO backstore user:rbd 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug failed to add datapool/block0 to LIO - error(Could not create _Backstore in configFS) 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug LUN alloc problem - failed to add datapool/block0 to LIO - error(Could not create _Backstore in configFS) 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "PUT /api/_disk/datapool/block0 HTTP/1.1" 500 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "PUT /api/_disk/datapool/block0 HTTP/1.1" 500 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug _disk change on localhost failed with 500 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "PUT /api/disk/datapool/block0 HTTP/1.1" 500 - 2026-03-08T23:26:32.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "PUT /api/disk/datapool/block0 HTTP/1.1" 500 - 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:31.691576+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.102:0/1555874818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:31.691576+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.102:0/1555874818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:31.699791+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 
192.168.123.102:0/3265824633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:31.699791+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.102:0/3265824633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.075614+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.102:0/2819365359' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.075614+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.102:0/2819365359' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.094233+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.102:0/788483470' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.094233+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.102:0/788483470' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.102419+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.102:0/3940111466' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.102419+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.102:0/3940111466' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.146320+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.102:0/3168802575' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.146320+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.102:0/3168802575' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.154125+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1151943053' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.154125+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1151943053' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.616454+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 
192.168.123.102:0/3481544551' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.616454+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3481544551' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.635513+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.102:0/3289745633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.635513+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.102:0/3289745633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.643798+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.102:0/1267337813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[17457]: audit 2026-03-08T23:26:32.643798+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.102:0/1267337813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.105 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:33.105 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:32 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:32] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:31.691576+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.102:0/1555874818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:31.691576+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.102:0/1555874818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:31.699791+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.102:0/3265824633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:31.699791+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.102:0/3265824633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.075614+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 
192.168.123.102:0/2819365359' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.075614+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.102:0/2819365359' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.094233+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.102:0/788483470' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.094233+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.102:0/788483470' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.102419+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.102:0/3940111466' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.102419+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.102:0/3940111466' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.146320+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.102:0/3168802575' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.146320+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.102:0/3168802575' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.154125+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1151943053' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.154125+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1151943053' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.616454+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3481544551' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.616454+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3481544551' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.635513+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 
192.168.123.102:0/3289745633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.635513+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.102:0/3289745633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.643798+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.102:0/1267337813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:32 vm04 bash[19918]: audit 2026-03-08T23:26:32.643798+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.102:0/1267337813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:31.691576+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.102:0/1555874818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:31.691576+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.102:0/1555874818' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:31.699791+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.102:0/3265824633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:31.699791+0000 mon.a (mon.0) 762 : audit [DBG] from='client.? 192.168.123.102:0/3265824633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.075614+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.102:0/2819365359' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.075614+0000 mon.a (mon.0) 763 : audit [DBG] from='client.? 192.168.123.102:0/2819365359' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.094233+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.102:0/788483470' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.094233+0000 mon.a (mon.0) 764 : audit [DBG] from='client.? 192.168.123.102:0/788483470' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.102419+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 
192.168.123.102:0/3940111466' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.102419+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.102:0/3940111466' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.146320+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.102:0/3168802575' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.146320+0000 mon.a (mon.0) 766 : audit [DBG] from='client.? 192.168.123.102:0/3168802575' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.154125+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1151943053' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.154125+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1151943053' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.616454+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3481544551' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.616454+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3481544551' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.635513+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.102:0/3289745633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.635513+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.102:0/3289745633' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.643798+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.102:0/1267337813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:32 vm10 bash[20034]: audit 2026-03-08T23:26:32.643798+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 
192.168.123.102:0/1267337813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:33] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:33.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:33] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: cluster 2026-03-08T23:26:32.258847+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: cluster 2026-03-08T23:26:32.258847+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:32.695291+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/3124044775' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:32.695291+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/3124044775' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:32.703688+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.102:0/538397720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:32.703688+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.102:0/538397720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.068928+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2609479918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.068928+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2609479918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.090017+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/835177346' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.090017+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 
192.168.123.102:0/835177346' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.099273+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.102:0/3611843326' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.099273+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.102:0/3611843326' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.142192+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.102:0/1805851320' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.142192+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.102:0/1805851320' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.150224+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/266274239' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.150224+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/266274239' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.509468+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1637132033' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.509468+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1637132033' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.531579+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.102:0/132799917' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.531579+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.102:0/132799917' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.540527+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.102:0/2718057363' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.540527+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 
192.168.123.102:0/2718057363' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.583471+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/1988863721' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.583471+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/1988863721' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.592340+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 192.168.123.102:0/813519233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[17457]: audit 2026-03-08T23:26:33.592340+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 192.168.123.102:0/813519233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:33.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:33] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:33.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:33] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: cluster 2026-03-08T23:26:32.258847+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: cluster 2026-03-08T23:26:32.258847+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:32.695291+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/3124044775' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:32.695291+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/3124044775' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:32.703688+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.102:0/538397720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:32.703688+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 
192.168.123.102:0/538397720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.068928+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2609479918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.068928+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2609479918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.090017+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/835177346' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.090017+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/835177346' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.099273+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.102:0/3611843326' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.099273+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.102:0/3611843326' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.142192+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.102:0/1805851320' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.142192+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.102:0/1805851320' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.150224+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/266274239' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.150224+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/266274239' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.509468+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1637132033' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.509468+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.102:0/1637132033' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.531579+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.102:0/132799917' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.531579+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.102:0/132799917' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.540527+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.102:0/2718057363' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.540527+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.102:0/2718057363' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.583471+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/1988863721' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.583471+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/1988863721' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.592340+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 192.168.123.102:0/813519233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:33 vm04 bash[19918]: audit 2026-03-08T23:26:33.592340+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 192.168.123.102:0/813519233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: cluster 2026-03-08T23:26:32.258847+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: cluster 2026-03-08T23:26:32.258847+0000 mgr.x (mgr.14150) 454 : cluster [DBG] pgmap v364: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:32.695291+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/3124044775' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:32.695291+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 
192.168.123.102:0/3124044775' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:32.703688+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.102:0/538397720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:32.703688+0000 mon.a (mon.0) 772 : audit [DBG] from='client.? 192.168.123.102:0/538397720' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.068928+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2609479918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.068928+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.102:0/2609479918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.090017+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/835177346' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.090017+0000 mon.c (mon.1) 28 : audit [DBG] from='client.? 192.168.123.102:0/835177346' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.099273+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.102:0/3611843326' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.099273+0000 mon.a (mon.0) 773 : audit [DBG] from='client.? 192.168.123.102:0/3611843326' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.142192+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.102:0/1805851320' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.142192+0000 mon.b (mon.2) 22 : audit [DBG] from='client.? 192.168.123.102:0/1805851320' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.150224+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/266274239' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.150224+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 
192.168.123.102:0/266274239' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.509468+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1637132033' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.509468+0000 mon.c (mon.1) 29 : audit [DBG] from='client.? 192.168.123.102:0/1637132033' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.531579+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.102:0/132799917' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.531579+0000 mon.a (mon.0) 775 : audit [DBG] from='client.? 192.168.123.102:0/132799917' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.540527+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.102:0/2718057363' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.540527+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.102:0/2718057363' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.583471+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/1988863721' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.583471+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/1988863721' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.592340+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 192.168.123.102:0/813519233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:33 vm10 bash[20034]: audit 2026-03-08T23:26:33.592340+0000 mon.a (mon.0) 777 : audit [DBG] from='client.? 
192.168.123.102:0/813519233' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:33] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:34.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:33 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:33] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:34.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: debug Unable to create the Target definition - Could not create ISCSIFabricModule in configFS 2026-03-08T23:26:34.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: debug Failure during gateway 'init' processing 2026-03-08T23:26:34.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:34] "PUT /api/target/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw HTTP/1.1" 500 - 2026-03-08T23:26:34.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:34] "PUT /api/target/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw HTTP/1.1" 500 - 2026-03-08T23:26:34.694 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:34] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:34.695 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:34] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:33.945173+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.102:0/2966454438' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:33.945173+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.102:0/2966454438' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:33.963228+0000 mon.a (mon.0) 779 : audit [DBG] from='client.? 192.168.123.102:0/2905843596' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:33.963228+0000 mon.a (mon.0) 779 : audit [DBG] from='client.? 192.168.123.102:0/2905843596' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:33.972077+0000 mon.a (mon.0) 780 : audit [DBG] from='client.? 192.168.123.102:0/135499737' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:33.972077+0000 mon.a (mon.0) 780 : audit [DBG] from='client.? 
192.168.123.102:0/135499737' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.015396+0000 mon.a (mon.0) 781 : audit [DBG] from='client.? 192.168.123.102:0/933228805' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.015396+0000 mon.a (mon.0) 781 : audit [DBG] from='client.? 192.168.123.102:0/933228805' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.023931+0000 mon.a (mon.0) 782 : audit [DBG] from='client.? 192.168.123.102:0/4242522743' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.023931+0000 mon.a (mon.0) 782 : audit [DBG] from='client.? 192.168.123.102:0/4242522743' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.402939+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.102:0/454444027' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.402939+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.102:0/454444027' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.420758+0000 mon.a (mon.0) 783 : audit [DBG] from='client.? 192.168.123.102:0/1508695649' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.420758+0000 mon.a (mon.0) 783 : audit [DBG] from='client.? 192.168.123.102:0/1508695649' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.429320+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.102:0/3290327928' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.429320+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.102:0/3290327928' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.471240+0000 mon.c (mon.1) 30 : audit [DBG] from='client.? 192.168.123.102:0/4219736797' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.471240+0000 mon.c (mon.1) 30 : audit [DBG] from='client.? 
2026-03-08T23:26:34.876 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[17457]: audit 2026-03-08T23:26:34.479770+0000 mon.a (mon.0) 784 : audit [DBG] from='client.? 192.168.123.102:0/2066549119' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:34.957 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:34 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:34] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:35.644 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:35 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:35] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: cluster 2026-03-08T23:26:34.259104+0000 mgr.x (mgr.14150) 455 : cluster [DBG] pgmap v365: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s
2026-03-08T23:26:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:34.838750+0000 mon.a (mon.0) 785 : audit [DBG] from='client.? 192.168.123.102:0/3805749696' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:34.861524+0000 mon.b (mon.2) 26 : audit [DBG] from='client.? 192.168.123.102:0/3646151296' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:34.869723+0000 mon.c (mon.1) 31 : audit [DBG] from='client.? 192.168.123.102:0/2537252061' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:34.912548+0000 mon.a (mon.0) 786 : audit [DBG] from='client.? 192.168.123.102:0/2513285364' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:34.922403+0000 mon.a (mon.0) 787 : audit [DBG] from='client.? 192.168.123.102:0/4087063666' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:36.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:35.284522+0000 mon.a (mon.0) 788 : audit [DBG] from='client.? 192.168.123.102:0/4178238833' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:36.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:35.304220+0000 mon.c (mon.1) 32 : audit [DBG] from='client.? 192.168.123.102:0/3802732585' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:36.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:35.313653+0000 mon.a (mon.0) 789 : audit [DBG] from='client.? 192.168.123.102:0/219475772' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:35.358499+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.102:0/4240596339' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:35 vm04 bash[19918]: audit 2026-03-08T23:26:35.366933+0000 mon.a (mon.0) 790 : audit [DBG] from='client.? 192.168.123.102:0/2804202506' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:36.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:35 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:35] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:36.613 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:36] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:35.710856+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.102:0/2874863348' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:35.736919+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 192.168.123.102:0/1835597632' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:35.754973+0000 mon.a (mon.0) 792 : audit [DBG] from='client.? 192.168.123.102:0/2872287607' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:35.797985+0000 mon.b (mon.2) 29 : audit [DBG] from='client.? 192.168.123.102:0/2356391057' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:35.804769+0000 mon.a (mon.0) 793 : audit [DBG] from='client.? 192.168.123.102:0/3105433503' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.155804+0000 mon.c (mon.1) 33 : audit [DBG] from='client.? 192.168.123.102:0/3613173601' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.173861+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.102:0/1845466754' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.181779+0000 mon.a (mon.0) 794 : audit [DBG] from='client.? 192.168.123.102:0/3098456042' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.224612+0000 mon.b (mon.2) 30 : audit [DBG] from='client.? 192.168.123.102:0/2938211856' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.233248+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.102:0/1158797597' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.583624+0000 mon.a (mon.0) 795 : audit [DBG] from='client.? 192.168.123.102:0/2651784808' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.601413+0000 mon.a (mon.0) 796 : audit [DBG] from='client.? 192.168.123.102:0/1756262548' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.608670+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.102:0/1300956888' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.649445+0000 mon.a (mon.0) 798 : audit [DBG] from='client.? 192.168.123.102:0/1524599759' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[17457]: audit 2026-03-08T23:26:36.657071+0000 mon.a (mon.0) 799 : audit [DBG] from='client.? 192.168.123.102:0/801683301' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:36.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:36 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:36] "GET /api/config HTTP/1.1" 200 -
192.168.123.102:0/801683301' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:37.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:37] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:37.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:37] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: cluster 2026-03-08T23:26:36.259413+0000 mgr.x (mgr.14150) 456 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: cluster 2026-03-08T23:26:36.259413+0000 mgr.x (mgr.14150) 456 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.022443+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.102:0/4276610019' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.022443+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.102:0/4276610019' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.049327+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.102:0/98791492' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.049327+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.102:0/98791492' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.058262+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.102:0/902094012' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.058262+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.102:0/902094012' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.100926+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.102:0/795521312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.100926+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 
192.168.123.102:0/795521312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.109526+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.102:0/265882475' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.109526+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.102:0/265882475' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.468414+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.102:0/474113742' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.468414+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.102:0/474113742' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:37.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.491944+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.102:0/1421804874' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.491944+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.102:0/1421804874' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.500428+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.102:0/4020683107' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.500428+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.102:0/4020683107' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.543674+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.102:0/2986737190' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.543674+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.102:0/2986737190' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.553151+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.102:0/1051614020' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[17457]: audit 2026-03-08T23:26:37.553151+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 
192.168.123.102:0/1051614020' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:37.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:37] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:37.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:37] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: cluster 2026-03-08T23:26:36.259413+0000 mgr.x (mgr.14150) 456 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: cluster 2026-03-08T23:26:36.259413+0000 mgr.x (mgr.14150) 456 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.022443+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.102:0/4276610019' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.022443+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.102:0/4276610019' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.049327+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.102:0/98791492' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.049327+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.102:0/98791492' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.058262+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.102:0/902094012' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.058262+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.102:0/902094012' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.100926+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.102:0/795521312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.100926+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 
192.168.123.102:0/795521312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.109526+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.102:0/265882475' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.109526+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.102:0/265882475' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.468414+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.102:0/474113742' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.468414+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.102:0/474113742' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.491944+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.102:0/1421804874' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.491944+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.102:0/1421804874' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.500428+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.102:0/4020683107' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.500428+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.102:0/4020683107' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.543674+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.102:0/2986737190' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.543674+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.102:0/2986737190' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.553151+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.102:0/1051614020' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:37 vm04 bash[19918]: audit 2026-03-08T23:26:37.553151+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 
192.168.123.102:0/1051614020' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: cluster 2026-03-08T23:26:36.259413+0000 mgr.x (mgr.14150) 456 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: cluster 2026-03-08T23:26:36.259413+0000 mgr.x (mgr.14150) 456 : cluster [DBG] pgmap v366: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.022443+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.102:0/4276610019' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.022443+0000 mon.a (mon.0) 800 : audit [DBG] from='client.? 192.168.123.102:0/4276610019' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.049327+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.102:0/98791492' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.049327+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.102:0/98791492' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.058262+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.102:0/902094012' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.058262+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.102:0/902094012' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.100926+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.102:0/795521312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.100926+0000 mon.c (mon.1) 38 : audit [DBG] from='client.? 192.168.123.102:0/795521312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.109526+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 192.168.123.102:0/265882475' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.109526+0000 mon.a (mon.0) 801 : audit [DBG] from='client.? 
192.168.123.102:0/265882475' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.468414+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.102:0/474113742' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.468414+0000 mon.a (mon.0) 802 : audit [DBG] from='client.? 192.168.123.102:0/474113742' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.491944+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.102:0/1421804874' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.491944+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.102:0/1421804874' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.500428+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.102:0/4020683107' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.500428+0000 mon.a (mon.0) 803 : audit [DBG] from='client.? 192.168.123.102:0/4020683107' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.543674+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.102:0/2986737190' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.543674+0000 mon.a (mon.0) 804 : audit [DBG] from='client.? 192.168.123.102:0/2986737190' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.553151+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 192.168.123.102:0/1051614020' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:37 vm10 bash[20034]: audit 2026-03-08T23:26:37.553151+0000 mon.a (mon.0) 805 : audit [DBG] from='client.? 
192.168.123.102:0/1051614020' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:38.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:37] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:38.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:37 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:37] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:38.755 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:38] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:38.755 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:38] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:37.938284+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.102:0/1264571562' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:37.938284+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.102:0/1264571562' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:37.958850+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.102:0/1388299002' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:37.958850+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.102:0/1388299002' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:37.968285+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.102:0/3519856029' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:37.968285+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.102:0/3519856029' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.010707+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.102:0/2633776939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.010707+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.102:0/2633776939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.019282+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 
192.168.123.102:0/267527931' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.019282+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.102:0/267527931' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.369826+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.102:0/2930652846' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.369826+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.102:0/2930652846' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.390198+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.102:0/3861818009' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.390198+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.102:0/3861818009' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.398359+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.102:0/1205385457' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.398359+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.102:0/1205385457' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.439441+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.102:0/4158493925' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.439441+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.102:0/4158493925' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.448106+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.102:0/70965021' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:38 vm04 bash[19918]: audit 2026-03-08T23:26:38.448106+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.102:0/70965021' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:37.938284+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 
192.168.123.102:0/1264571562' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:37.938284+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.102:0/1264571562' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:37.958850+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.102:0/1388299002' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:37.958850+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.102:0/1388299002' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:37.968285+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.102:0/3519856029' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:37.968285+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.102:0/3519856029' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.010707+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.102:0/2633776939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.010707+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.102:0/2633776939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.019282+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.102:0/267527931' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.019282+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.102:0/267527931' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.369826+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.102:0/2930652846' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.369826+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.102:0/2930652846' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.390198+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 
192.168.123.102:0/3861818009' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.390198+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.102:0/3861818009' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.398359+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.102:0/1205385457' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.398359+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.102:0/1205385457' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.439441+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.102:0/4158493925' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.439441+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.102:0/4158493925' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.448106+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.102:0/70965021' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[17457]: audit 2026-03-08T23:26:38.448106+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.102:0/70965021' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:38] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:39.145 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:38 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:38] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:37.938284+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.102:0/1264571562' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:37.938284+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 192.168.123.102:0/1264571562' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:37.958850+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 
192.168.123.102:0/1388299002' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:37.958850+0000 mon.a (mon.0) 807 : audit [DBG] from='client.? 192.168.123.102:0/1388299002' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:37.968285+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.102:0/3519856029' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:37.968285+0000 mon.a (mon.0) 808 : audit [DBG] from='client.? 192.168.123.102:0/3519856029' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.010707+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.102:0/2633776939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.010707+0000 mon.b (mon.2) 32 : audit [DBG] from='client.? 192.168.123.102:0/2633776939' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.019282+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.102:0/267527931' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.019282+0000 mon.a (mon.0) 809 : audit [DBG] from='client.? 192.168.123.102:0/267527931' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.369826+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.102:0/2930652846' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.369826+0000 mon.a (mon.0) 810 : audit [DBG] from='client.? 192.168.123.102:0/2930652846' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.390198+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.102:0/3861818009' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.390198+0000 mon.c (mon.1) 39 : audit [DBG] from='client.? 192.168.123.102:0/3861818009' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.398359+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 
192.168.123.102:0/1205385457' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.398359+0000 mon.a (mon.0) 811 : audit [DBG] from='client.? 192.168.123.102:0/1205385457' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.439441+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.102:0/4158493925' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.439441+0000 mon.a (mon.0) 812 : audit [DBG] from='client.? 192.168.123.102:0/4158493925' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.448106+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.102:0/70965021' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:38 vm10 bash[20034]: audit 2026-03-08T23:26:38.448106+0000 mon.c (mon.1) 40 : audit [DBG] from='client.? 192.168.123.102:0/70965021' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:39.644 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:39] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:39.644 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:39] "GET /api/config HTTP/1.1" 200 - 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: cluster 2026-03-08T23:26:38.259682+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 341 B/s wr, 4 op/s 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: cluster 2026-03-08T23:26:38.259682+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 341 B/s wr, 4 op/s 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.802195+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.102:0/3722496598' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.802195+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.102:0/3722496598' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.820912+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 
192.168.123.102:0/548083550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.820912+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.102:0/548083550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.829629+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.102:0/3671966946' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.829629+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.102:0/3671966946' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.874751+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 192.168.123.102:0/1847102499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.874751+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 192.168.123.102:0/1847102499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.882925+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.102:0/4014626664' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:38.882925+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.102:0/4014626664' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.236738+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.102:0/570191218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.236738+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.102:0/570191218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.255853+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.102:0/3264398584' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.255853+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.102:0/3264398584' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.265319+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 
192.168.123.102:0/2728369362' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.265319+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.102:0/2728369362' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.306470+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.102:0/83938991' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.306470+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.102:0/83938991' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.315605+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.102:0/1573337854' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.315605+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.102:0/1573337854' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.674196+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.102:0/1638723733' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.674196+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.102:0/1638723733' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.694715+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.102:0/1216526593' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.694715+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.102:0/1216526593' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.703132+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.102:0/4046579299' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.703132+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.102:0/4046579299' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.746242+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 
192.168.123.102:0/2831382579' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.746242+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.102:0/2831382579' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.754397+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.102:0/2466719317' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:40.125 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:39 vm04 bash[19918]: audit 2026-03-08T23:26:39.754397+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.102:0/2466719317' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: cluster 2026-03-08T23:26:38.259682+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 341 B/s wr, 4 op/s 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: cluster 2026-03-08T23:26:38.259682+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 341 B/s wr, 4 op/s 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.802195+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.102:0/3722496598' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.802195+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.102:0/3722496598' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.820912+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.102:0/548083550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.820912+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.102:0/548083550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.829629+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.102:0/3671966946' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.829629+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.102:0/3671966946' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.874751+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 
192.168.123.102:0/1847102499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:38.882925+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.102:0/4014626664' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.236738+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.102:0/570191218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.255853+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.102:0/3264398584' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.265319+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.102:0/2728369362' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.306470+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.102:0/83938991' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.315605+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.102:0/1573337854' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.674196+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.102:0/1638723733' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.694715+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.102:0/1216526593' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.703132+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.102:0/4046579299' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.746242+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.102:0/2831382579' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[17457]: audit 2026-03-08T23:26:39.754397+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.102:0/2466719317' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:39] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:40.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:39 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:39] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:40.135 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: cluster 2026-03-08T23:26:38.259682+0000 mgr.x (mgr.14150) 457 : cluster [DBG] pgmap v367: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 341 B/s wr, 4 op/s
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:38.802195+0000 mon.a (mon.0) 813 : audit [DBG] from='client.? 192.168.123.102:0/3722496598' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:38.820912+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.102:0/548083550' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:38.829629+0000 mon.a (mon.0) 815 : audit [DBG] from='client.? 192.168.123.102:0/3671966946' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:38.874751+0000 mon.a (mon.0) 816 : audit [DBG] from='client.? 192.168.123.102:0/1847102499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:38.882925+0000 mon.a (mon.0) 817 : audit [DBG] from='client.? 192.168.123.102:0/4014626664' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.236738+0000 mon.a (mon.0) 818 : audit [DBG] from='client.? 192.168.123.102:0/570191218' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.255853+0000 mon.b (mon.2) 33 : audit [DBG] from='client.? 192.168.123.102:0/3264398584' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.265319+0000 mon.a (mon.0) 819 : audit [DBG] from='client.? 192.168.123.102:0/2728369362' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.306470+0000 mon.a (mon.0) 820 : audit [DBG] from='client.? 192.168.123.102:0/83938991' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.315605+0000 mon.a (mon.0) 821 : audit [DBG] from='client.? 192.168.123.102:0/1573337854' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.674196+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.102:0/1638723733' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.694715+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.102:0/1216526593' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.703132+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.102:0/4046579299' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.746242+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.102:0/2831382579' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:39 vm10 bash[20034]: audit 2026-03-08T23:26:39.754397+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.102:0/2466719317' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:40] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:40.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:40] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:40.603 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:40] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:40.603 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:40] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.120816+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.102:0/874338587' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.134965+0000 mgr.x (mgr.14150) 458 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.141024+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 192.168.123.102:0/899019122' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.149565+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.102:0/285097935' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.190983+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.102:0/612889499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.198325+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.102:0/1761965413' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:40.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.571438+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.102:0/2222942225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:40.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.589421+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.102:0/3876931864' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:40.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.598603+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.102:0/2109549609' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.640343+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.102:0/713657918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:40.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:40 vm02 bash[17457]: audit 2026-03-08T23:26:40.648767+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.102:0/3444479278' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:41.097 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.120816+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.102:0/874338587' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.134965+0000 mgr.x (mgr.14150) 458 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.141024+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 192.168.123.102:0/899019122' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.149565+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.102:0/285097935' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.190983+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.102:0/612889499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.198325+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.102:0/1761965413' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.571438+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.102:0/2222942225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.589421+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.102:0/3876931864' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.598603+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.102:0/2109549609' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.640343+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.102:0/713657918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.098 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:40 vm10 bash[20034]: audit 2026-03-08T23:26:40.648767+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.102:0/3444479278' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.120816+0000 mon.a (mon.0) 827 : audit [DBG] from='client.? 192.168.123.102:0/874338587' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.134965+0000 mgr.x (mgr.14150) 458 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.141024+0000 mon.b (mon.2) 34 : audit [DBG] from='client.? 192.168.123.102:0/899019122' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.149565+0000 mon.a (mon.0) 828 : audit [DBG] from='client.? 192.168.123.102:0/285097935' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.190983+0000 mon.b (mon.2) 35 : audit [DBG] from='client.? 192.168.123.102:0/612889499' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.198325+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.102:0/1761965413' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.571438+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.102:0/2222942225' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.589421+0000 mon.a (mon.0) 830 : audit [DBG] from='client.? 192.168.123.102:0/3876931864' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.598603+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.102:0/2109549609' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.640343+0000 mon.a (mon.0) 831 : audit [DBG] from='client.? 192.168.123.102:0/713657918' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:41.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:40 vm04 bash[19918]: audit 2026-03-08T23:26:40.648767+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.102:0/3444479278' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:41.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:41] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:41.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:41] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:41.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:41 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:26:41.793 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:41] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:41.794 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:41] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:/home/ubuntu/cephtest/archive/cram.client.0/gwcli_create.t: failed
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:--- gwcli_create.t
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:+++ gwcli_create.t.err
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:@@ -17,35 +17,29 @@
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: =============================
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli disks/ create pool=datapool image=block0 size=300M wwn=36001405da17b74481464e9fa968746d3
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls disks/ | grep 'o- disks' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- 300M, Disks: 1]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:+ 0.00Y, Disks: 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls disks/ | grep 'o- datapool' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- datapool (300M)]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls disks/ | grep 'o- block0' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- datapool/block0 (Unknown, 300M)]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: 
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: Create the target IQN
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: =====================
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/ create target_iqn=iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iscsi-targets' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- DiscoveryAuth: None, Targets: 1]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:+ DiscoveryAuth: None, Targets: 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iqn.2003-01.com.redhat.iscsi-gw:ceph-gw' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- Auth: None, Gateways: 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- disks' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- Disks: 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- gateways' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- Up: 0/0, Portals: 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- host-groups' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- Groups : 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- hosts' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- Auth: ACL_ENABLED, Hosts: 0]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: 
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: Create the first gateway
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: ========================
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ HOST=$(python3 -c "import socket; print(socket.getfqdn())")
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: > IP=`hostname -i | awk '{print $1}'`
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: > sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/gateways create ip_addresses=$IP gateway_name=$HOST
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:+ [255]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- gateways' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:- Up: 1/1, Portals: 1]
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: 
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: Create the second gateway
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: ========================
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout:@@ -59,27 +53,29 @@
2026-03-08T23:26:41.958 INFO:tasks.cram.client.0.vm02.stdout: > HOST=$(python3 -c "import socket; print(socket.getfqdn('$IP'))")
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: > sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/gateways create ip_addresses=$IP gateway_name=$HOST
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: > fi
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ [255]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- gateways' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:- Up: 2/2, Portals: 2]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: 
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: Attach the disk
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: ===============
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/disks/ add disk=datapool/block0
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ [255]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- disks' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:- Disks: 1]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: 
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: Create a host
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: =============
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/hosts create client_iqn=iqn.1994-05.com.redhat:client
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ [255]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- hosts' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:- Auth: ACL_ENABLED, Hosts: 1]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iqn.1994-05.com.redhat:client' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:- Auth: None, Disks: 0(0.00Y)]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: 
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: Map the LUN
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: ===========
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw/hosts/iqn.1994-05.com.redhat:client disk disk=datapool/block0
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ No such path /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-gw
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:+ [255]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- hosts' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:- Auth: ACL_ENABLED, Hosts: 1]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout: $ sudo $CENGINE exec $ISCSI_CONTAINER gwcli ls iscsi-targets/ | grep 'o- iqn.1994-05.com.redhat:client' | awk -F'[' '{print $2}'
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:- Auth: None, Disks: 1(300M)]
2026-03-08T23:26:41.959 INFO:tasks.cram.client.0.vm02.stdout:# Ran 1 tests, 0 skipped, 1 failed.
2026-03-08T23:26:41.960 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:26:41.960 DEBUG:teuthology.orchestra.run.vm02:> test -f /home/ubuntu/cephtest/archive/cram.client.0/gwcli_create.t.err || rm -f -- /home/ubuntu/cephtest/archive/cram.client.0/gwcli_create.t
2026-03-08T23:26:42.004 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/virtualenv /home/ubuntu/cephtest/clone.client.0 ; rmdir --ignore-fail-on-non-empty /home/ubuntu/cephtest/archive/cram.client.0
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: cluster 2026-03-08T23:26:40.260185+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:40.995643+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.102:0/3546444774' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.013331+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.102:0/2178238226' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.020305+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.102:0/4159300914' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.061332+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.102:0/1446920034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.067619+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 192.168.123.102:0/4282351196' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.097431+0000 mgr.x (mgr.14150) 460 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.411013+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.102:0/3909107248' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.428971+0000 mon.a (mon.0) 836 : audit [DBG] from='client.? 192.168.123.102:0/1919498079' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.437375+0000 mon.a (mon.0) 837 : audit [DBG] from='client.? 192.168.123.102:0/2891759880' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.477818+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.102:0/1217837813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:41 vm04 bash[19918]: audit 2026-03-08T23:26:41.485071+0000 mon.a (mon.0) 838 : audit [DBG] from='client.? 192.168.123.102:0/2313234202' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[37757]: debug ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:41] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:42.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[37757]: ::ffff:127.0.0.1 - - [08/Mar/2026 23:26:41] "GET /api/config HTTP/1.1" 200 -
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: cluster 2026-03-08T23:26:40.260185+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:40.995643+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.102:0/3546444774' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.013331+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.102:0/2178238226' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.020305+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.102:0/4159300914' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.061332+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.102:0/1446920034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.067619+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 192.168.123.102:0/4282351196' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.097431+0000 mgr.x (mgr.14150) 460 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.411013+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.102:0/3909107248' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.428971+0000 mon.a (mon.0) 836 : audit [DBG] from='client.? 192.168.123.102:0/1919498079' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.437375+0000 mon.a (mon.0) 837 : audit [DBG] from='client.? 192.168.123.102:0/2891759880' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.477818+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.102:0/1217837813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:41 vm02 bash[17457]: audit 2026-03-08T23:26:41.485071+0000 mon.a (mon.0) 838 : audit [DBG] from='client.? 192.168.123.102:0/2313234202' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: cluster 2026-03-08T23:26:40.260185+0000 mgr.x (mgr.14150) 459 : cluster [DBG] pgmap v368: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 7 op/s
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:40.995643+0000 mon.a (mon.0) 832 : audit [DBG] from='client.? 192.168.123.102:0/3546444774' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.013331+0000 mon.a (mon.0) 833 : audit [DBG] from='client.? 192.168.123.102:0/2178238226' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.020305+0000 mon.a (mon.0) 834 : audit [DBG] from='client.? 192.168.123.102:0/4159300914' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.061332+0000 mon.b (mon.2) 39 : audit [DBG] from='client.? 192.168.123.102:0/1446920034' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.067619+0000 mon.a (mon.0) 835 : audit [DBG] from='client.? 192.168.123.102:0/4282351196' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.097431+0000 mgr.x (mgr.14150) 460 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.411013+0000 mon.b (mon.2) 40 : audit [DBG] from='client.? 192.168.123.102:0/3909107248' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.428971+0000 mon.a (mon.0) 836 : audit [DBG] from='client.? 192.168.123.102:0/1919498079' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.437375+0000 mon.a (mon.0) 837 : audit [DBG] from='client.? 192.168.123.102:0/2891759880' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.477818+0000 mon.b (mon.2) 41 : audit [DBG] from='client.? 192.168.123.102:0/1217837813' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch
2026-03-08T23:26:42.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:41 vm10 bash[20034]: audit 2026-03-08T23:26:41.485071+0000 mon.a (mon.0) 838 : audit [DBG] from='client.? 192.168.123.102:0/2313234202' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch
2026-03-08T23:26:42.486 DEBUG:teuthology.orchestra.run.vm04:> test -f /home/ubuntu/cephtest/archive/cram.client.1/iscsi_client.t.err || rm -f -- /home/ubuntu/cephtest/archive/cram.client.1/iscsi_client.t
2026-03-08T23:26:42.490 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/virtualenv /home/ubuntu/cephtest/clone.client.1 ; rmdir --ignore-fail-on-non-empty /home/ubuntu/cephtest/archive/cram.client.1
2026-03-08T23:26:42.989 DEBUG:teuthology.orchestra.run.vm10:> test -f /home/ubuntu/cephtest/archive/cram.client.2/gwcli_delete.t.err || rm -f -- /home/ubuntu/cephtest/archive/cram.client.2/gwcli_delete.t
2026-03-08T23:26:42.993 DEBUG:teuthology.orchestra.run.vm10:> rm -rf -- /home/ubuntu/cephtest/virtualenv /home/ubuntu/cephtest/clone.client.2 ; rmdir --ignore-fail-on-non-empty /home/ubuntu/cephtest/archive/cram.client.2
2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.842284+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.102:0/1507752903' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch
2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.861558+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.102:0/1032413943' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.871156+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 
192.168.123.102:0/3434471313' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.871156+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.102:0/3434471313' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.911980+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.102:0/3424005618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.911980+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.102:0/3424005618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.919224+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.102:0/2572526070' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:43.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:42 vm04 bash[19918]: audit 2026-03-08T23:26:41.919224+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.102:0/2572526070' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.842284+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.102:0/1507752903' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.842284+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.102:0/1507752903' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.861558+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.102:0/1032413943' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.861558+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.102:0/1032413943' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.871156+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.102:0/3434471313' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.871156+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.102:0/3434471313' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.911980+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 
192.168.123.102:0/3424005618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.911980+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.102:0/3424005618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.919224+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.102:0/2572526070' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:43.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:42 vm02 bash[17457]: audit 2026-03-08T23:26:41.919224+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.102:0/2572526070' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.842284+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.102:0/1507752903' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.842284+0000 mon.c (mon.1) 41 : audit [DBG] from='client.? 192.168.123.102:0/1507752903' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "version"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.861558+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.102:0/1032413943' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.861558+0000 mon.c (mon.1) 42 : audit [DBG] from='client.? 192.168.123.102:0/1032413943' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.871156+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.102:0/3434471313' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.871156+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.102:0/3434471313' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.911980+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.102:0/3424005618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.911980+0000 mon.a (mon.0) 839 : audit [DBG] from='client.? 192.168.123.102:0/3424005618' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "status", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.919224+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 
192.168.123.102:0/2572526070' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:43.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:42 vm10 bash[20034]: audit 2026-03-08T23:26:41.919224+0000 mon.a (mon.0) 840 : audit [DBG] from='client.? 192.168.123.102:0/2572526070' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "df", "format": "json"}]: dispatch 2026-03-08T23:26:43.440 ERROR:teuthology.run_tasks:Saw exception from tasks. Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks manager = run_one_task(taskname, ctx=ctx, config=config) File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task return task(**kwargs) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cram.py", line 97, in task _run_tests(ctx, role) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cram.py", line 147, in _run_tests remote.run( File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t' 2026-03-08T23:26:43.441 DEBUG:teuthology.run_tasks:Unwinding manager ceph_iscsi_client 2026-03-08T23:26:43.443 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-08T23:26:43.445 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-08T23:26:43.445 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-08T23:26:43.446 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-08T23:26:43.447 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-08T23:26:43.474 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 
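The CommandFailedError above is the terminal failure of this run: cram exited non-zero on vm02 while running the client.0 tests. Per the earlier `test -f ...t.err || rm -f ...t` cleanup lines, a test file is kept in the archive only when a matching .err transcript exists, so the failing .t and its .err should survive under /home/ubuntu/cephtest/archive/cram.client.0/. A minimal reproduction sketch, assuming SSH access to vm02 before the node is reimaged; the command string is copied verbatim from the error, the helper itself is hypothetical and not teuthology code:

    #!/usr/bin/env python3
    # Hypothetical repro helper (not teuthology code): re-run the exact cram
    # invocation from the CommandFailedError above on vm02 and show its output.
    import subprocess

    FAILED_CMD = (
        'CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin '
        'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage '
        '/home/ubuntu/cephtest/virtualenv/bin/cram -v -- '
        '/home/ubuntu/cephtest/archive/cram.client.0/*.t'
    )

    def main() -> int:
        # shell=True so $PATH and the *.t glob expand the same way they did
        # under teuthology's remote runner.
        proc = subprocess.run(FAILED_CMD, shell=True, text=True,
                              capture_output=True)
        print(proc.stdout, end="")
        print(proc.stderr, end="")
        return proc.returncode  # 1 in this run

    if __name__ == "__main__":
        raise SystemExit(main())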
2026-03-08T23:26:43.474 DEBUG:teuthology.orchestra.run.vm02:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-08T23:26:43.479 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-08T23:26:43.479 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-08T23:26:43.485 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-08T23:26:43.485 DEBUG:teuthology.orchestra.run.vm10:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-08T23:26:43.531 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:26:43.548 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:26:43.549 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:26:43.758 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:26:43.759 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:26:43.783 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:26:43.784 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:26:43.796 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:26:43.796 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
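Each node runs the same removal recipe: one apt-get purge per package, with `|| true` so a package that was never installed cannot abort teardown. A sketch of how that shell loop could be assembled (assumed shape for illustration only, not the actual teuthology.task.install.deb code); note the `W: --force-yes is deprecated` stderr lines further down come from the `--force-yes` flag used here:

    # Sketch (assumed, not teuthology's implementation): rebuild the per-node
    # purge loop seen in the DEBUG lines above. Purging one package at a time
    # with "|| true" keeps teardown going even when a purge fails.
    PACKAGES = [
        "ceph", "cephadm", "ceph-mds", "ceph-mgr", "ceph-common", "ceph-fuse",
        "ceph-test", "ceph-volume", "radosgw", "python3-rados", "python3-rgw",
        "python3-cephfs", "python3-rbd", "libcephfs2", "libcephfs-dev",
        "librados2", "librbd1", "rbd-fuse",
    ]

    APT_PURGE = (
        'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes '
        '-o Dpkg::Options::="--force-confdef" '
        '-o Dpkg::Options::="--force-confold" purge'
    )

    def purge_loop(packages=PACKAGES) -> str:
        """Reassemble the shell loop that each node executes."""
        return ("for d in " + " ".join(packages) + " ; do "
                + APT_PURGE + " $d || true ; done")

    if __name__ == "__main__":
        print(purge_loop())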
2026-03-08T23:26:43.821 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:43 vm10 bash[20034]: cluster 2026-03-08T23:26:42.260420+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:43.821 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:43 vm02 bash[17457]: cluster 2026-03-08T23:26:42.260420+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:43.821 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:43 vm04 bash[19918]: cluster 2026-03-08T23:26:42.260420+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:43.984 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:43.985 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:43.985 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:44.001 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:26:44.002 INFO:teuthology.orchestra.run.vm02.stdout:  ceph*
2026-03-08T23:26:44.041 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:44.042 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:44.042 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:44.056 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:44.058 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:44.058 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:44.061 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:26:44.062 INFO:teuthology.orchestra.run.vm04.stdout:  ceph*
2026-03-08T23:26:44.076 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:26:44.078 INFO:teuthology.orchestra.run.vm10.stdout:  ceph*
2026-03-08T23:26:44.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:43 vm02 bash[17457]: cluster 2026-03-08T23:26:42.260420+0000 mgr.x (mgr.14150) 461 : cluster [DBG] pgmap v369: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:44.206 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:26:44.206 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-08T23:26:44.251 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118698 files and directories currently installed.)
2026-03-08T23:26:44.254 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:44.259 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:26:44.259 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-08T23:26:44.271 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:26:44.271 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-08T23:26:44.307 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118698 files and directories currently installed.)
2026-03-08T23:26:44.310 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:44.319 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118698 files and directories currently installed.)
2026-03-08T23:26:44.321 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:45.658 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:45.661 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:45.697 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:26:45.698 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:26:45.713 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:45.748 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:26:45.889 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:26:45.889 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:26:45.909 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:26:45.909 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:26:45.932 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:26:45.932 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:26:46.069 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:46.070 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:46.070 INFO:teuthology.orchestra.run.vm04.stdout:  python-asyncssh-doc python3-asyncssh
2026-03-08T23:26:46.070 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:46.086 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:26:46.087 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-08T23:26:46.107 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:46.109 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:46.109 INFO:teuthology.orchestra.run.vm02.stdout:  python-asyncssh-doc python3-asyncssh
2026-03-08T23:26:46.109 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:46.117 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:46.119 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:46.119 INFO:teuthology.orchestra.run.vm10.stdout:  python-asyncssh-doc python3-asyncssh
2026-03-08T23:26:46.119 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:46.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:45 vm04 bash[19918]: cluster 2026-03-08T23:26:44.260708+0000 mgr.x (mgr.14150) 462 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:46.127 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:26:46.129 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-08T23:26:46.135 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:26:46.137 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-08T23:26:46.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:45 vm02 bash[17457]: cluster 2026-03-08T23:26:44.260708+0000 mgr.x (mgr.14150) 462 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:46.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:45 vm10 bash[20034]: cluster 2026-03-08T23:26:44.260708+0000 mgr.x (mgr.14150) 462 : cluster [DBG] pgmap v370: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.9 KiB/s rd, 341 B/s wr, 6 op/s
2026-03-08T23:26:46.275 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:26:46.275 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-08T23:26:46.317 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118696 files and directories currently installed.)
2026-03-08T23:26:46.318 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:26:46.318 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-08T23:26:46.319 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.320 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:26:46.320 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-08T23:26:46.342 INFO:teuthology.orchestra.run.vm04.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.367 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118696 files and directories currently installed.)
2026-03-08T23:26:46.368 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118696 files and directories currently installed.)
2026-03-08T23:26:46.370 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.371 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.372 INFO:teuthology.orchestra.run.vm04.stdout:Looking for files to backup/remove ...
2026-03-08T23:26:46.374 INFO:teuthology.orchestra.run.vm04.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-08T23:26:46.377 INFO:teuthology.orchestra.run.vm04.stdout:Removing user `cephadm' ...
2026-03-08T23:26:46.377 INFO:teuthology.orchestra.run.vm04.stdout:Warning: group `nogroup' has no more members.
2026-03-08T23:26:46.388 INFO:teuthology.orchestra.run.vm10.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.388 INFO:teuthology.orchestra.run.vm04.stdout:Done.
2026-03-08T23:26:46.390 INFO:teuthology.orchestra.run.vm02.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.412 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:26:46.417 INFO:teuthology.orchestra.run.vm10.stdout:Looking for files to backup/remove ...
2026-03-08T23:26:46.419 INFO:teuthology.orchestra.run.vm10.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-08T23:26:46.419 INFO:teuthology.orchestra.run.vm02.stdout:Looking for files to backup/remove ...
2026-03-08T23:26:46.420 INFO:teuthology.orchestra.run.vm10.stdout:Removing user `cephadm' ...
2026-03-08T23:26:46.421 INFO:teuthology.orchestra.run.vm10.stdout:Warning: group `nogroup' has no more members.
2026-03-08T23:26:46.421 INFO:teuthology.orchestra.run.vm02.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-08T23:26:46.423 INFO:teuthology.orchestra.run.vm02.stdout:Removing user `cephadm' ...
2026-03-08T23:26:46.423 INFO:teuthology.orchestra.run.vm02.stdout:Warning: group `nogroup' has no more members.
2026-03-08T23:26:46.434 INFO:teuthology.orchestra.run.vm10.stdout:Done.
2026-03-08T23:26:46.436 INFO:teuthology.orchestra.run.vm02.stdout:Done.
2026-03-08T23:26:46.459 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:26:46.460 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:26:46.538 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-08T23:26:46.541 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.584 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-08T23:26:46.586 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-08T23:26:46.587 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:46.589 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:47.888 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:47.889 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:47.922 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:26:47.924 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:26:47.938 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:47.975 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:26:48.122 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:26:48.122 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:26:48.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:47 vm04 bash[19918]: cluster 2026-03-08T23:26:46.261011+0000 mgr.x (mgr.14150) 463 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 341 B/s wr, 8 op/s
2026-03-08T23:26:48.144 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:26:48.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:47 vm02 bash[17457]: cluster 2026-03-08T23:26:46.261011+0000 mgr.x (mgr.14150) 463 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 341 B/s wr, 8 op/s
2026-03-08T23:26:48.144 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:26:48.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:47 vm10 bash[20034]: cluster 2026-03-08T23:26:46.261011+0000 mgr.x (mgr.14150) 463 : cluster [DBG] pgmap v371: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.2 KiB/s rd, 341 B/s wr, 8 op/s
2026-03-08T23:26:48.203 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:26:48.204 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:26:48.379 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:48.380 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:48.380 INFO:teuthology.orchestra.run.vm02.stdout:  python-asyncssh-doc python3-asyncssh
2026-03-08T23:26:48.380 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:48.396 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:26:48.398 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mds*
2026-03-08T23:26:48.407 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:48.408 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:48.409 INFO:teuthology.orchestra.run.vm04.stdout:  python-asyncssh-doc python3-asyncssh
2026-03-08T23:26:48.409 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:48.431 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:26:48.432 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mds*
2026-03-08T23:26:48.472 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:48.473 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mon libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:26:48.473 INFO:teuthology.orchestra.run.vm10.stdout:  python-asyncssh-doc python3-asyncssh
2026-03-08T23:26:48.473 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:48.490 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:26:48.491 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mds*
2026-03-08T23:26:48.594 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:26:48.594 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-08T23:26:48.625 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:26:48.625 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-08T23:26:48.639 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-08T23:26:48.641 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:48.672 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118622 files and directories currently installed.)
2026-03-08T23:26:48.675 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:26:48.678 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-08T23:26:48.678 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-08T23:26:48.720 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118622 files and directories currently installed.) 2026-03-08T23:26:48.723 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:26:49.025 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:48 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.025 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:48 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.025 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:48 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.025 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:48 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:48 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:26:49.042 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:48 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.042 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:48 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.042 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:48 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.042 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:48 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.095 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:48 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.095 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:48 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.095 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:48 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.095 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:48 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.095 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:48 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.128 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:26:49.154 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:26:49.315 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:26:49.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.374 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.374 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:26:49.385 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118614 files and directories currently installed.) 
2026-03-08T23:26:49.387 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:49.392 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-08T23:26:49.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.394 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.394 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.395 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:49.406 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.440 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-08T23:26:49.443 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:49.782 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.782 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.782 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.782 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.808 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.809 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.809 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.809 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.850 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.850 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.850 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.851 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:49.851 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:49 vm04 bash[19918]: cluster 2026-03-08T23:26:48.261289+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 4 op/s
2026-03-08T23:26:50.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:49 vm04 bash[19918]: cluster 2026-03-08T23:26:48.261289+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 4 op/s
2026-03-08T23:26:50.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.125 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.125 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:49 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.143 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.143 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.143 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.143 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:49 vm02 bash[17457]: cluster 2026-03-08T23:26:48.261289+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 4 op/s
2026-03-08T23:26:50.143 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:49 vm02 bash[17457]: cluster 2026-03-08T23:26:48.261289+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 4 op/s
2026-03-08T23:26:50.143 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.143 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:49 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.143 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:50 vm02 bash[37757]: debug there is no tcmu-runner data available
2026-03-08T23:26:50.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.157 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:49 vm10 bash[20034]: cluster 2026-03-08T23:26:48.261289+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 4 op/s
2026-03-08T23:26:50.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:49 vm10 bash[20034]: cluster 2026-03-08T23:26:48.261289+0000 mgr.x (mgr.14150) 464 : cluster [DBG] pgmap v372: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 4 op/s
2026-03-08T23:26:50.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:50.157 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:49 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:51.100 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:51.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:50 vm04 bash[19918]: audit 2026-03-08T23:26:50.143042+0000 mgr.x (mgr.14150) 465 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:51.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:50 vm04 bash[19918]: audit 2026-03-08T23:26:50.143042+0000 mgr.x (mgr.14150) 465 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:51.138 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:26:51.143 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:51.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:50 vm02 bash[17457]: audit 2026-03-08T23:26:50.143042+0000 mgr.x (mgr.14150) 465 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:51.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:50 vm02 bash[17457]: audit 2026-03-08T23:26:50.143042+0000 mgr.x (mgr.14150) 465 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:51.156 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:51.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:50 vm10 bash[20034]: audit 2026-03-08T23:26:50.143042+0000 mgr.x (mgr.14150) 465 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:51.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:50 vm10 bash[20034]: audit 2026-03-08T23:26:50.143042+0000 mgr.x (mgr.14150) 465 : audit [DBG] from='client.24320 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:51.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:51 vm10 bash[39354]: debug there is no tcmu-runner data available
2026-03-08T23:26:51.181 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:26:51.192 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:26:51.407 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:26:51.408 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:26:51.418 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:26:51.419 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:26:51.441 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:26:51.441 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:26:51.530 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:51.530 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mgr-modules-core ceph-mon libboost-iostreams1.74.0
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  libboost-thread1.74.0 libpmemobj1 python-asyncssh-doc python-pastedeploy-tpl
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-portend python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:26:51.531 INFO:teuthology.orchestra.run.vm02.stdout:  python3-waitress python3-webob python3-websocket python3-webtest
2026-03-08T23:26:51.532 INFO:teuthology.orchestra.run.vm02.stdout:  python3-werkzeug python3-zc.lockfile
2026-03-08T23:26:51.532 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:51.547 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:26:51.547 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-08T23:26:51.549 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mgr-k8sevents*
2026-03-08T23:26:51.701 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:51.701 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-modules-core ceph-mon libboost-iostreams1.74.0
2026-03-08T23:26:51.702 INFO:teuthology.orchestra.run.vm04.stdout:  libboost-thread1.74.0 libpmemobj1 python-asyncssh-doc python-pastedeploy-tpl
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-portend python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-waitress python3-webob python3-websocket python3-webtest
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:  python3-werkzeug python3-zc.lockfile
2026-03-08T23:26:51.703 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:51.710 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:51.711 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-modules-core ceph-mon libboost-iostreams1.74.0
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  libboost-thread1.74.0 libpmemobj1 python-asyncssh-doc python-pastedeploy-tpl
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-portend python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:26:51.712 INFO:teuthology.orchestra.run.vm10.stdout:  python3-waitress python3-webob python3-websocket python3-webtest
2026-03-08T23:26:51.713 INFO:teuthology.orchestra.run.vm10.stdout:  python3-werkzeug python3-zc.lockfile
2026-03-08T23:26:51.713 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:51.721 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:26:51.721 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-08T23:26:51.723 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-k8sevents*
2026-03-08T23:26:51.735 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:26:51.735 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-08T23:26:51.737 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-k8sevents*
2026-03-08T23:26:51.761 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-08T23:26:51.761 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 165 MB disk space will be freed.
2026-03-08T23:26:51.807 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-08T23:26:51.810 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:51.821 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:51.849 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:51.894 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:51.935 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-08T23:26:51.935 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 165 MB disk space will be freed.
2026-03-08T23:26:51.953 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-08T23:26:51.953 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 165 MB disk space will be freed.
2026-03-08T23:26:51.977 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-08T23:26:51.979 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:51.997 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:51.997 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118614 files and directories currently installed.)
2026-03-08T23:26:52.000 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.013 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.026 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.042 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.067 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.084 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:51 vm04 bash[19918]: cluster 2026-03-08T23:26:50.261619+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 5 op/s
2026-03-08T23:26:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:51 vm04 bash[19918]: cluster 2026-03-08T23:26:50.261619+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 5 op/s
2026-03-08T23:26:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:51 vm04 bash[19918]: audit 2026-03-08T23:26:51.105086+0000 mgr.x (mgr.14150) 467 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:52.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:51 vm04 bash[19918]: audit 2026-03-08T23:26:51.105086+0000 mgr.x (mgr.14150) 467 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:52.137 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:51 vm02 bash[17457]: cluster 2026-03-08T23:26:50.261619+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 5 op/s
2026-03-08T23:26:52.138 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:51 vm02 bash[17457]: cluster 2026-03-08T23:26:50.261619+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 5 op/s
2026-03-08T23:26:52.138 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:51 vm02 bash[17457]: audit 2026-03-08T23:26:51.105086+0000 mgr.x (mgr.14150) 467 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:52.138 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:51 vm02 bash[17457]: audit 2026-03-08T23:26:51.105086+0000 mgr.x (mgr.14150) 467 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:52.138 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.138 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.138 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.138 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.138 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:51 vm10 bash[20034]: cluster 2026-03-08T23:26:50.261619+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 5 op/s
2026-03-08T23:26:52.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:51 vm10 bash[20034]: cluster 2026-03-08T23:26:50.261619+0000 mgr.x (mgr.14150) 466 : cluster [DBG] pgmap v373: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 4.0 KiB/s rd, 5 op/s
2026-03-08T23:26:52.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:51 vm10 bash[20034]: audit 2026-03-08T23:26:51.105086+0000 mgr.x (mgr.14150) 467 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:52.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:51 vm10 bash[20034]: audit 2026-03-08T23:26:51.105086+0000 mgr.x (mgr.14150) 467 : audit [DBG] from='client.24370 -' entity='client.iscsi.iscsi.b' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:26:52.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.394 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.394 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.395 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.507 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-08T23:26:52.510 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.560 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-08T23:26:52.562 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.593 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.593 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.657 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.691 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 118030 files and directories currently installed.)
2026-03-08T23:26:52.694 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:52.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.874 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.874 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.874 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.894 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.894 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:52.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.079 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.079 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.079 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.079 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.079 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:52 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.374 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.374 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:52 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.394 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.394 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:52 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:53.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:53 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:54.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:53 vm02 bash[17457]: cluster 2026-03-08T23:26:52.261887+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:54.156 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:53 vm10 bash[20034]: cluster 2026-03-08T23:26:52.261887+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:54.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:53 vm10 bash[20034]: cluster 2026-03-08T23:26:52.261887+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:54.194 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:54.229 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:26:54.250 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:54.253 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:53 vm04 bash[19918]: cluster 2026-03-08T23:26:52.261887+0000 mgr.x (mgr.14150) 468 : cluster [DBG] pgmap v374: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:54.256 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:26:54.286 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:26:54.294 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:26:54.433 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:26:54.433 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:26:54.442 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:26:54.442 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:26:54.462 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:26:54.463 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
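The --force-yes warnings come from the apt-get invocation used to purge the previous packages on each node; apt split that blanket flag into narrower options, each opting into one specific risk, exactly as the warning says ("use one of the options starting with --allow instead"). A sketch of the modern equivalent; the flag set below is illustrative, since the full command line teuthology passes is not shown in this log:

    # --force-yes used to disable several unrelated safety checks at once;
    # current apt-get wants each risk enabled explicitly:
    sudo apt-get -y \
        --allow-downgrades \
        --allow-remove-essential \
        --allow-change-held-packages \
        purge ceph-base ceph-common ceph-mon ceph-osd ceph-test ceph-volume radosgw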
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:26:54.544 INFO:teuthology.orchestra.run.vm10.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:  smartmontools socat xmlstarlet
2026-03-08T23:26:54.545 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:54.552 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:26:54.552 INFO:teuthology.orchestra.run.vm10.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-08T23:26:54.590 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:54.590 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:  smartmontools socat xmlstarlet
2026-03-08T23:26:54.591 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
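In the plan above, the trailing * on each name (ceph-base*, ceph-common*, and so on) is apt's marker for a purge rather than a plain removal: configuration files are deleted along with the binaries. The "no longer required" list, by contrast, is only advisory until something runs autoremove. Both can be previewed safely with apt's simulate mode, for example:

    # Simulate (-s) performs no changes; the '*' markers again show which
    # packages would lose their configuration files too.
    apt-get -s purge ceph-common
    apt-get -s autoremove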
2026-03-08T23:26:54.598 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:26:54.598 INFO:teuthology.orchestra.run.vm04.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:26:54.679 INFO:teuthology.orchestra.run.vm02.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:26:54.680 INFO:teuthology.orchestra.run.vm02.stdout:  python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:26:54.680 INFO:teuthology.orchestra.run.vm02.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:26:54.680 INFO:teuthology.orchestra.run.vm02.stdout:  smartmontools socat xmlstarlet
2026-03-08T23:26:54.680 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:26:54.687 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:26:54.687 INFO:teuthology.orchestra.run.vm02.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-08T23:26:54.744 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:26:54.744 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 472 MB disk space will be freed.
2026-03-08T23:26:54.793 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118030 files and directories currently installed.)
2026-03-08T23:26:54.796 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:54.800 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:26:54.800 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 472 MB disk space will be freed.
2026-03-08T23:26:54.847 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118030 files and directories currently installed.)
2026-03-08T23:26:54.849 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:54.867 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:26:54.867 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 472 MB disk space will be freed.
2026-03-08T23:26:54.905 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:54.924 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118030 files and directories currently installed.)
2026-03-08T23:26:54.924 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:54.968 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:55.012 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:55.343 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.344 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.344 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.344 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.344 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.374 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.374 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.394 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.394 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.394 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.452 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:55.480 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:55.540 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:55.615 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.615 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.615 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.615 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.615 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.628 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.628 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.628 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.628 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.710 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.710 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.710 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.710 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.711 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.884 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.884 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.884 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.885 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.885 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.896 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.896 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.898 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.898 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:55 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:55.934 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:55.985 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.049 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.076 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.076 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.076 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.076 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:55 vm02 bash[17457]: cluster 2026-03-08T23:26:54.262186+0000 mgr.x (mgr.14150) 469 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:56.076 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.076 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:55 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
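The bash[...] lines relayed by each monitor are the cluster log's pgmap digest: 4 PGs, all active+clean, 449 KiB of data, 216 MiB used of 160 GiB raw, plus a small read load from the iSCSI workload. The same one-line summary can be queried on demand from a node with admin credentials; a sketch using the containerized CLI, since this cluster was deployed with cephadm:

    # Print the current pgmap digest (the same fields as the [DBG] pgmap lines):
    sudo cephadm shell -- ceph pg stat
    # Or the fuller status block it is derived from:
    sudo cephadm shell -- ceph -s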
2026-03-08T23:26:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:55 vm10 bash[20034]: cluster 2026-03-08T23:26:54.262186+0000 mgr.x (mgr.14150) 469 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:56.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.157 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.157 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:55 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.198 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:55 vm04 bash[19918]: cluster 2026-03-08T23:26:54.262186+0000 mgr.x (mgr.14150) 469 : cluster [DBG] pgmap v375: 4 pgs: 4 active+clean; 449 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s
2026-03-08T23:26:56.198 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.198 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.198 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.198 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.352 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.352 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.352 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.352 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.352 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.441 INFO:teuthology.orchestra.run.vm04.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.474 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.474 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.474 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.497 INFO:teuthology.orchestra.run.vm10.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.519 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.519 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.519 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.519 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.519 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.578 INFO:teuthology.orchestra.run.vm02.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.624 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.624 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.624 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.625 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.625 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.826 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.827 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.827 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.827 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.888 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.888 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.888 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.888 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.888 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.894 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.894 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:56 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:56.926 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.968 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:56.989 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:57.029 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:57.074 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:57.091 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.091 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: Stopping Ceph osd.2 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: Stopping Ceph osd.3 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:57 vm04 bash[28230]: debug 2026-03-08T23:26:57.071+0000 7fad7ab4e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:57 vm04 bash[28230]: debug 2026-03-08T23:26:57.071+0000 7fad7ab4e640 -1 osd.3 65 *** Got signal Terminated ***
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:57 vm04 bash[28230]: debug 2026-03-08T23:26:57.071+0000 7fad7ab4e640 -1 osd.3 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.092 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: Stopping Ceph osd.4 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.092 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.092 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:56 vm04 systemd[1]: Stopping Ceph mon.b for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.124 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:57.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopping Ceph mon.c for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopping Ceph osd.5 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.157 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.157 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopping Ceph osd.6 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.158 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.158 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopping Ceph osd.7 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.158 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:56 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.158 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopping Ceph iscsi.iscsi.b for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.273 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.273 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: Stopping Ceph osd.0 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.274 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.274 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: Stopping Ceph osd.1 for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.274 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.274 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: Stopping Ceph mon.a for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.274 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.274 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: Stopping Ceph mgr.x for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.274 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.274 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: Stopping Ceph iscsi.iscsi.a for 91105a84-1b44-11f1-9a43-e95894f13987...
2026-03-08T23:26:57.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 bash[19918]: debug 2026-03-08T23:26:57.091+0000 7f4e32ba6640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-08T23:26:57.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 bash[19918]: debug 2026-03-08T23:26:57.091+0000 7f4e32ba6640 -1 mon.b@2(peon) e3 *** Got Signal Terminated ***
2026-03-08T23:26:57.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:57 vm04 bash[22499]: debug 2026-03-08T23:26:57.087+0000 7fbedb4ae640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:57 vm04 bash[22499]: debug 2026-03-08T23:26:57.087+0000 7fbedb4ae640 -1 osd.2 65 *** Got signal Terminated ***
2026-03-08T23:26:57.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:57 vm04 bash[22499]: debug 2026-03-08T23:26:57.087+0000 7fbedb4ae640 -1 osd.2 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.374 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.375 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:57 vm04 bash[34211]: debug 2026-03-08T23:26:57.123+0000 7fc65f284640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.375 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:57 vm04 bash[34211]: debug 2026-03-08T23:26:57.123+0000 7fc65f284640 -1 osd.4 65 *** Got signal Terminated ***
2026-03-08T23:26:57.375 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:57 vm04 bash[34211]: debug 2026-03-08T23:26:57.123+0000 7fc65f284640 -1 osd.4 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.375 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.532 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 bash[20034]: debug 2026-03-08T23:26:57.193+0000 7f92f16f3640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-08T23:26:57.532 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 bash[20034]: debug 2026-03-08T23:26:57.193+0000 7f92f16f3640 -1 mon.c@1(peon) e3 *** Got Signal Terminated ***
2026-03-08T23:26:57.532 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.532 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:57 vm10 bash[22952]: debug 2026-03-08T23:26:57.229+0000 7fe198f99640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.532 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:57 vm10 bash[22952]: debug 2026-03-08T23:26:57.229+0000 7fe198f99640 -1 osd.5 65 *** Got signal Terminated ***
2026-03-08T23:26:57.532 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:57 vm10 bash[22952]: debug 2026-03-08T23:26:57.229+0000 7fe198f99640 -1 osd.5 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.532 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:57 vm10 bash[29057]: debug 2026-03-08T23:26:57.269+0000 7fdf3199f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:57 vm10 bash[29057]: debug 2026-03-08T23:26:57.269+0000 7fdf3199f640 -1 osd.6 65 *** Got signal Terminated ***
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:57 vm10 bash[29057]: debug 2026-03-08T23:26:57.269+0000 7fdf3199f640 -1 osd.6 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:57 vm10 bash[35212]: debug 2026-03-08T23:26:57.181+0000 7fd19f7c4640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:57 vm10 bash[35212]: debug 2026-03-08T23:26:57.181+0000 7fd19f7c4640 -1 osd.7 65 *** Got signal Terminated ***
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:57 vm10 bash[35212]: debug 2026-03-08T23:26:57.181+0000 7fd19f7c4640 -1 osd.7 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.533 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.533 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 bash[39354]: debug Shutdown received
2026-03-08T23:26:57.533 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 bash[39354]: debug No gateway configuration to remove on this host (vm10.local)
2026-03-08T23:26:57.533 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
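[Note: the stop sequences above trace the containerized signal path: systemd stops the unit, /sbin/docker-init (PID 1 inside the container) forwards SIGTERM to the ceph daemon, and each OSD then exits immediately because osd_fast_shutdown is true, as the "Immediate shutdown" lines report. A sketch of how that knob could be inspected or flipped for debugging, assuming a reachable cluster and an admin keyring:

    ceph config get osd osd_fast_shutdown         # reports true here, per the log above
    ceph config set osd osd_fast_shutdown false   # opt into the slower, fully clean shutdown path
]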
2026-03-08T23:26:57.648 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 bash[17457]: debug 2026-03-08T23:26:57.352+0000 7f7982550640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-08T23:26:57.651 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 bash[17457]: debug 2026-03-08T23:26:57.352+0000 7f7982550640 -1 mon.a@0(leader) e3 *** Got Signal Terminated ***
2026-03-08T23:26:57.651 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.651 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.651 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.651 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.651 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.651 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 bash[27574]: debug 2026-03-08T23:26:57.416+0000 7f784d684640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.651 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 bash[27574]: debug 2026-03-08T23:26:57.416+0000 7f784d684640 -1 osd.0 65 *** Got signal Terminated ***
2026-03-08T23:26:57.651 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 bash[27574]: debug 2026-03-08T23:26:57.416+0000 7f784d684640 -1 osd.0 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.652 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 bash[33499]: debug 2026-03-08T23:26:57.332+0000 7fa5a3075640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-08T23:26:57.652 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 bash[33499]: debug 2026-03-08T23:26:57.332+0000 7fa5a3075640 -1 osd.1 65 *** Got signal Terminated ***
2026-03-08T23:26:57.652 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 bash[33499]: debug 2026-03-08T23:26:57.332+0000 7fa5a3075640 -1 osd.1 65 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-08T23:26:57.652 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.652 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.652 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 bash[37757]: debug Shutdown received
2026-03-08T23:26:57.652 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 bash[37757]: debug No gateway configuration to remove on this host (vm02.local)
2026-03-08T23:26:57.652 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.799 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.799 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.799 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 bash[44343]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-mon-c
2026-03-08T23:26:57.799 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.799 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.799 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.862 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.862 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.862 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.862 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.870 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:26:57.879 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:26:57.886 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:26:57.898 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:57 vm02 bash[49387]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-mon-a
2026-03-08T23:26:57.898 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:57 vm02 bash[49358]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-mgr-x
2026-03-08T23:26:57.898 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:57 vm02 bash[49344]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-1
2026-03-08T23:26:57.899 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.899 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 bash[49370]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-iscsi-iscsi-a
2026-03-08T23:26:57.899 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:57.935 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:26:57.942 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:26:57.982 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:26:58.001 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117550 files and directories currently installed.)
2026-03-08T23:26:58.003 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
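[Note: the earlier "Removing <pkg>" lines followed by this second dpkg pass that is "Purging configuration files" match a remove-then-purge teardown; purging deletes the conffiles that a plain remove leaves behind. A one-step equivalent with apt, using a package name taken from the log, would be:

    apt-get purge -y radosgw   # remove the package and its configuration files together
]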
2026-03-08T23:26:58.012 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117550 files and directories currently installed.)
2026-03-08T23:26:58.014 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:58.088 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117550 files and directories currently installed.)
2026-03-08T23:26:58.090 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 bash[42310]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-mon-b
2026-03-08T23:26:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.b.service: Deactivated successfully.
2026-03-08T23:26:58.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:57 vm04 systemd[1]: Stopped Ceph mon.b for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:26:58.145 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 bash[44540]: Error response from daemon: No such container: ceph-91105a84-1b44-11f1-9a43-e95894f13987-mon-c
2026-03-08T23:26:58.145 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.c.service: Deactivated successfully.
2026-03-08T23:26:58.145 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopped Ceph mon.c for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:26:58.145 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 bash[44373]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-iscsi-iscsi-b
2026-03-08T23:26:58.145 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 bash[44574]: Error response from daemon: No such container: ceph-91105a84-1b44-11f1-9a43-e95894f13987-iscsi-iscsi-b
2026-03-08T23:26:58.145 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@iscsi.iscsi.b.service: Deactivated successfully.
2026-03-08T23:26:58.145 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:57 vm10 systemd[1]: Stopped Ceph iscsi.iscsi.b for 91105a84-1b44-11f1-9a43-e95894f13987.
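[Note: the "Error response from daemon: No such container" lines are benign teardown noise: the unit's stop commands evidently remove the daemon container by name after stopping it, and the container is already gone. The usual idempotent form of such cleanup, with a container name taken from the log, is:

    docker rm -f ceph-91105a84-1b44-11f1-9a43-e95894f13987-mon-c 2>/dev/null || true   # succeed even if already removed
]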
2026-03-08T23:26:58.238 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x.service: Deactivated successfully.
2026-03-08T23:26:58.239 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: Stopped Ceph mgr.x for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:26:58.239 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@iscsi.iscsi.a.service: Deactivated successfully.
2026-03-08T23:26:58.239 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:57 vm02 systemd[1]: Stopped Ceph iscsi.iscsi.a for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:26:58.239 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service: Deactivated successfully.
2026-03-08T23:26:58.239 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: Stopped Ceph mon.a for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.458 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.458 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.459 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.459 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.459 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.459 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.459 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.459 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.536 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.536 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.536 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.536 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.537 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.537 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.537 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.537 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.537 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.537 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.637 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:58.665 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:58.754 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:58.781 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.781 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.781 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.781 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.781 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.791 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.791 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.791 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.791 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.1.service: Deactivated successfully.
2026-03-08T23:26:58.791 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: Stopped Ceph osd.1 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:26:58.791 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.816 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.816 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.816 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:58.816 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.038 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.038 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.038 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.038 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.038 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.039 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:58 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.072 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.072 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.072 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.073 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:58 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.073 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.126 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:59.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.144 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.144 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:58 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.154 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:59.223 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:59.295 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.296 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.296 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.296 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.328 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.328 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.328 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.328 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.437 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.546 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.547 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.617 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:59.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.624 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.624 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.624 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.624 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.624 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.625 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:59.668 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:26:59.713 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.714 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.714 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.714 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.714 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.802 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.998 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.998 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.998 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:26:59.998 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:26:59 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.064 INFO:teuthology.orchestra.run.vm04.stdout:dpkg: warning: while removing ceph-common, directory '/var/lib/ceph' not empty so not removed
2026-03-08T23:27:00.072 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:00.088 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.088 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.088 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.088 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.088 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:26:59 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.113 INFO:teuthology.orchestra.run.vm10.stdout:dpkg: warning: while removing ceph-common, directory '/var/lib/ceph' not empty so not removed
2026-03-08T23:27:00.127 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:00.156 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:26:59 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.156 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.160 INFO:teuthology.orchestra.run.vm02.stdout:dpkg: warning: while removing ceph-common, directory '/var/lib/ceph' not empty so not removed
2026-03-08T23:27:00.169 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:00.311 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.311 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.312 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.312 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.312 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.312 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.312 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.312 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.340 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.341 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.341 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.341 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.341 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.624 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.624 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.624 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:00 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.645 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.645 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.645 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.646 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.646 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.646 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.646 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.646 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.646 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:00 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.907 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.907 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.907 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:00.907 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:00 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:01.682 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:01.722 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:27:01.797 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:01.837 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:27:01.977 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:27:01.978 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:27:01.989 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:02.026 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:27:02.086 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:27:02.087 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:27:02.260 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:02.260 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:02.261 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:02.273 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:27:02.273 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:27:02.350 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:27:02.350 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse*
2026-03-08T23:27:02.413 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:02.413 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:02.414 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:02.434 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:27:02.435 INFO:teuthology.orchestra.run.vm10.stdout: ceph-fuse*
2026-03-08T23:27:02.487 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:02 vm04 bash[42301]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-2
2026-03-08T23:27:02.487 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:02 vm04 bash[42299]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-3
2026-03-08T23:27:02.488 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:02 vm04 bash[42333]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-4
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:02.561 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:02.581 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:27:02.581 INFO:teuthology.orchestra.run.vm02.stdout: ceph-fuse*
2026-03-08T23:27:02.671 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:27:02.671 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-08T23:27:02.700 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:27:02.700 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-08T23:27:02.716 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:02 vm02 bash[49453]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-0
2026-03-08T23:27:02.725 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117536 files and directories currently installed.)
2026-03-08T23:27:02.730 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:02.747 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:02 vm04 bash[43015]: Error response from daemon: No such container: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-4
2026-03-08T23:27:02.792 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:27:02.792 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-08T23:27:02.803 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117536 files and directories currently installed.)
2026-03-08T23:27:02.808 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:02 vm10 bash[44347]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-5
2026-03-08T23:27:02.808 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:02 vm10 bash[45184]: Error response from daemon: No such container: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-5
2026-03-08T23:27:02.808 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:02 vm10 bash[44374]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-6
2026-03-08T23:27:02.808 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:02 vm10 bash[45171]: Error response from daemon: No such container: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-6
2026-03-08T23:27:02.808 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:02 vm10 bash[44342]: ceph-91105a84-1b44-11f1-9a43-e95894f13987-osd-7
2026-03-08T23:27:02.808 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:02.858 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117536 files and directories currently installed.)
2026-03-08T23:27:02.861 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:02.997 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:02 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:02.998 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:02 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:02.998 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:02 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.000 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:02 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.084 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.084 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.084 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:02 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:02 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.157 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:02 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.157 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:02 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:02 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.262 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.263 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.263 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.263 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.285 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:27:03.299 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:27:03.323 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:27:03.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.395 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.395 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.0.service: Deactivated successfully.
2026-03-08T23:27:03.395 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: Stopped Ceph osd.0 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:03.395 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.395 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.442 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117527 files and directories currently installed.)
2026-03-08T23:27:03.445 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:03.458 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117527 files and directories currently installed.)
2026-03-08T23:27:03.461 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:03.462 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.462 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.462 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.462 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.462 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.7.service: Deactivated successfully.
2026-03-08T23:27:03.462 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: Stopped Ceph osd.7 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:03.462 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.485 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117527 files and directories currently installed.)
2026-03-08T23:27:03.495 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:03.609 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.3.service: Deactivated successfully.
2026-03-08T23:27:03.610 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: Stopped Ceph osd.3 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:03.610 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.2.service: Deactivated successfully.
2026-03-08T23:27:03.610 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: Stopped Ceph osd.2 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:03.610 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.4.service: Deactivated successfully.
2026-03-08T23:27:03.610 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: Stopped Ceph osd.4 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.6.service: Deactivated successfully.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: Stopped Ceph osd.6 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.714 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.824 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.824 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.824 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.824 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.824 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.862 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.862 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.862 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.862 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:03.862 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:03 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.144 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.144 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.144 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.144 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:03 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.5.service: Deactivated successfully.
2026-03-08T23:27:04.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: Stopped Ceph osd.5 for 91105a84-1b44-11f1-9a43-e95894f13987.
2026-03-08T23:27:04.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.158 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.158 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:04.158 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:03 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:05.150 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:05.184 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
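[Editorial note: the recurring apt warning is triggered by the long-deprecated --force-yes switch that the install task still passes; the exact apt-get invocation is not shown in this excerpt, so the command below is only a sketch of the substitution apt itself recommends. Since apt 1.1, --force-yes is replaced by the finer-grained --allow-* options:

    # hypothetical old form, roughly what triggers the warning above:
    #   sudo apt-get -y --force-yes purge ceph-fuse
    # equivalent modern form; --allow-unauthenticated is also part of what
    # --force-yes used to imply and can be added if unsigned repos are in play:
    sudo apt-get -y --allow-downgrades --allow-remove-essential --allow-change-held-packages purge ceph-fuse
]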
2026-03-08T23:27:05.188 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:05.188 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:27:05.223 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:27:05.227 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:27:05.405 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:27:05.406 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:27:05.427 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:27:05.428 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:27:05.439 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:27:05.440 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:27:05.641 INFO:teuthology.orchestra.run.vm02.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-08T23:27:05.641 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:05.641 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:05.642 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:05.662 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:27:05.662 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:05.696 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:27:05.706 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-08T23:27:05.706 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:05.706 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:05.707 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:05.707 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm10.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:05.708 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:05.709 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:05.709 INFO:teuthology.orchestra.run.vm10.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:05.710 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:05.736 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:27:05.736 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:05.745 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:27:05.745 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:05.770 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-08T23:27:05.780 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:27:05.906 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:27:05.907 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:27:05.947 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:27:05.948 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:27:05.989 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:27:05.990 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:27:06.100 INFO:teuthology.orchestra.run.vm02.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-08T23:27:06.100 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:06.100 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:06.101 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:06.126 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:27:06.126 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:06.162 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:27:06.191 INFO:teuthology.orchestra.run.vm10.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-08T23:27:06.191 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:06.191 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:06.192 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat xmlstarlet
2026-03-08T23:27:06.193 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:06.220 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:27:06.220 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:06.240 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-08T23:27:06.240 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:06.240 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:06.241 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:06.242 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:06.242 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:06.242 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:06.242 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet 2026-03-08T23:27:06.242 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:06.254 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:06.269 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:06.269 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:06.302 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:06.385 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:06.386 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:06.480 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:06.480 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:06.531 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:06.531 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 
2026-03-08T23:27:06.603 INFO:teuthology.orchestra.run.vm02.stdout:Package 'radosgw' is not installed, so not removed 2026-03-08T23:27:06.603 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:06.603 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat xmlstarlet 2026-03-08T23:27:06.604 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:06.632 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:06.632 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:06.665 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 
2026-03-08T23:27:06.737 INFO:teuthology.orchestra.run.vm10.stdout:Package 'radosgw' is not installed, so not removed 2026-03-08T23:27:06.737 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:06.737 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat xmlstarlet 2026-03-08T23:27:06.738 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:06.771 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:06.771 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-08T23:27:06.782 INFO:teuthology.orchestra.run.vm04.stdout:Package 'radosgw' is not installed, so not removed 2026-03-08T23:27:06.782 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:06.782 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:06.783 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 libsqlite3-mod-ceph 2026-03-08T23:27:06.783 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat xmlstarlet 2026-03-08T23:27:06.784 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:06.804 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:06.814 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:06.815 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:06.854 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:06.885 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:06.886 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:06.963 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:06.963 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:07.031 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:07.032 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 
2026-03-08T23:27:07.123 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:07.123 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:07.123 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:07.124 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:07.141 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-08T23:27:07.141 INFO:teuthology.orchestra.run.vm02.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-08T23:27:07.181 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:07.181 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:07.181 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:07.182 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:07.183 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:07.183 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:07.183 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:07.183 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:07.183 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:07.183 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:07.200 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:27:07.200 INFO:teuthology.orchestra.run.vm10.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-08T23:27:07.252 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:07.253 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:07.253 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:27:07.253 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:27:07.254 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:07.269 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:27:07.269 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-08T23:27:07.346 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-08T23:27:07.346 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-08T23:27:07.379 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117527 files and directories currently installed.)
2026-03-08T23:27:07.380 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.389 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.391 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-08T23:27:07.391 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-08T23:27:07.398 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.426 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117527 files and directories currently installed.)
2026-03-08T23:27:07.428 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.439 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.451 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.456 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-08T23:27:07.456 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-08T23:27:07.495 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117527 files and directories currently installed.)
2026-03-08T23:27:07.497 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.512 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:07.524 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:08.515 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:08.554 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
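The '(Reading database ... N%' fragments around these removals are dpkg's progress meter: on a real terminal each frame overwrites the previous one with a carriage return, but a line-buffered capture like this log flattens every frame into the text, so only the final frame (here '... 117527 files and directories currently installed.') carries information. Below is a small post-processing sketch that keeps just that final frame when cleaning such captures; the regex and helper are illustrative and assume all frames of one meter land on a single captured line, and are not part of teuthology.

    # Illustrative log-cleaning helper: collapse dpkg's flattened
    # "(Reading database ... N%" progress frames down to the final frame.
    import re

    # Matches an intermediate frame, i.e. one that is immediately
    # followed by another "(Reading database ..." frame on the same line.
    _FRAME = re.compile(
        r"\(Reading database \.\.\. (?:\d+% )?(?=\(Reading database)"
    )

    def collapse_dpkg_progress(line: str) -> str:
        """Drop all but the last '(Reading database ...' frame in a line."""
        return _FRAME.sub("", line)

    sample = ("(Reading database ... (Reading database ... 5% "
              "(Reading database ... 100% (Reading database ... "
              "117527 files and directories currently installed.)")
    print(collapse_dpkg_progress(sample))
    # -> (Reading database ... 117527 files and directories currently installed.)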
2026-03-08T23:27:08.576 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:08.613 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:08.676 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:08.713 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:08.729 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:08.730 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:08.814 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:08.815 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-08T23:27:08.839 INFO:teuthology.orchestra.run.vm10.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-08T23:27:08.839 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:08.839 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:08.839 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:08.840 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:08.856 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:08.856 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:08.891 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:08.956 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:08.957 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:08.991 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-08T23:27:08.991 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:08.991 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:08.991 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:08.991 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:08.992 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:09.022 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:09.023 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-08T23:27:09.043 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:09.043 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:09.058 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:09.157 INFO:teuthology.orchestra.run.vm10.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-08T23:27:09.157 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:09.158 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:09.158 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:09.158 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:09.159 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:09.190 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:09.191 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:09.225 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 
2026-03-08T23:27:09.238 INFO:teuthology.orchestra.run.vm02.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-08T23:27:09.238 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:09.238 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:09.239 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:09.240 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:09.240 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:09.240 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:09.252 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:09.252 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-08T23:27:09.269 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:09.269 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:09.303 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:09.357 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:09.358 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:09.454 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 
2026-03-08T23:27:09.454 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:09.504 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-08T23:27:09.504 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:09.504 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:09.504 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:09.505 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:09.506 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:09.506 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:09.506 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:09.520 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:09.520 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:09.520 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:09.521 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:09.532 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:09.532 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-08T23:27:09.536 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED: 2026-03-08T23:27:09.536 INFO:teuthology.orchestra.run.vm10.stdout: python3-rbd* 2026-03-08T23:27:09.552 INFO:teuthology.orchestra.run.vm02.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-08T23:27:09.552 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:09.552 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:09.553 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:09.566 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:09.578 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:09.578 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:09.622 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:09.722 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-08T23:27:09.722 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 1186 kB disk space will be freed. 
2026-03-08T23:27:09.729 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:27:09.729 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:27:09.761 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117503 files and directories currently installed.)
2026-03-08T23:27:09.763 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:09.836 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree...
2026-03-08T23:27:09.837 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information...
2026-03-08T23:27:09.906 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:09.906 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:09.906 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:27:09.907 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:27:09.908 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:09.931 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-08T23:27:09.932 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd*
2026-03-08T23:27:10.056 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:10.056 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:10.056 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:27:10.056 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:27:10.057 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:10.073 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED:
2026-03-08T23:27:10.073 INFO:teuthology.orchestra.run.vm02.stdout: python3-rbd*
2026-03-08T23:27:10.126 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:27:10.126 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-08T23:27:10.165 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117503 files and directories currently installed.)
2026-03-08T23:27:10.168 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:10.255 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:27:10.256 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-08T23:27:10.293 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117503 files and directories currently installed.)
2026-03-08T23:27:10.295 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:10.879 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:10.916 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists...
2026-03-08T23:27:11.171 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree...
2026-03-08T23:27:11.171 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information...
2026-03-08T23:27:11.265 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:11.301 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
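Every 'Removing ... (19.2.3-678-ge911bdeb-1jammy)' line above carries the full Debian version of the build under test: upstream release 19.2.3, 678 commits past that tag, abbreviated commit e911bdeb identifying the exact build, Debian revision 1, built for jammy (Ubuntu 22.04). A small parsing sketch follows, assuming this git-describe-derived layout holds; the field names are mine, not an official schema.

    # Sketch: split a Ceph dev package version such as
    # "19.2.3-678-ge911bdeb-1jammy" into its parts. The layout
    # (git-describe output plus a Debian revision and distro codename)
    # is assumed from the strings seen in this log.
    import re

    VERSION_RE = re.compile(
        r"^(?P<upstream>\d+\.\d+\.\d+)"          # base release, e.g. 19.2.3
        r"-(?P<commits>\d+)"                     # commits since that tag
        r"-g(?P<sha>[0-9a-f]+)"                  # abbreviated commit hash
        r"-(?P<debrev>\d+)(?P<distro>[a-z]+)$"   # Debian rev + codename
    )

    m = VERSION_RE.match("19.2.3-678-ge911bdeb-1jammy")
    assert m is not None
    print(m.groupdict())
    # {'upstream': '19.2.3', 'commits': '678', 'sha': 'e911bdeb',
    #  'debrev': '1', 'distro': 'jammy'}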
2026-03-08T23:27:11.407 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:11.408 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:27:11.409 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:11.426 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED:
2026-03-08T23:27:11.427 INFO:teuthology.orchestra.run.vm10.stdout: libcephfs-dev* libcephfs2*
2026-03-08T23:27:11.539 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-08T23:27:11.539 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-08T23:27:11.589 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:27:11.614 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:27:11.614 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-08T23:27:11.627 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists...
2026-03-08T23:27:11.660 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117495 files and directories currently installed.)
2026-03-08T23:27:11.663 INFO:teuthology.orchestra.run.vm10.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:11.678 INFO:teuthology.orchestra.run.vm10.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:11.706 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:27:11.792 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:27:11.792 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0
2026-03-08T23:27:11.793 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:27:11.793 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:27:11.794 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:27:11.819 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-08T23:27:11.820 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev* libcephfs2* 2026-03-08T23:27:11.845 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:11.846 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:11.988 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:11.988 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:11.988 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:11.989 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:11.996 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-08T23:27:11.997 INFO:teuthology.orchestra.run.vm02.stdout: libcephfs-dev* libcephfs2* 2026-03-08T23:27:12.025 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-08T23:27:12.025 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-08T23:27:12.072 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117495 files and directories currently installed.) 2026-03-08T23:27:12.075 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:12.089 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:12.114 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:12.154 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-08T23:27:12.154 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-08T23:27:12.204 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117495 files and directories currently installed.) 2026-03-08T23:27:12.208 INFO:teuthology.orchestra.run.vm02.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:12.220 INFO:teuthology.orchestra.run.vm02.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:12.247 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:13.024 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:13.060 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:13.206 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:13.242 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:13.283 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:13.283 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:13.299 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:13.334 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:13.468 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:13.469 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
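The trailing '*' on the entries in the "following packages will be REMOVED" lists (libcephfs-dev*, libcephfs2*) is apt's marker that those packages are being purged, meaning their configuration files are deleted along with the binaries; that is why "Purging configuration files for ..." lines appear once dpkg runs. A sketch of the equivalent direct command, assuming the same package set:

    # purge = remove + delete conffiles; mirrors the starred entries above
    sudo apt-get purge --assume-yes libcephfs-dev libcephfs2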
2026-03-08T23:27:13.524 INFO:teuthology.orchestra.run.vm10.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-08T23:27:13.524 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:13.524 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:13.524 INFO:teuthology.orchestra.run.vm10.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:13.525 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:13.526 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:13.558 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:13.558 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:13.571 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:13.571 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:13.592 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 
2026-03-08T23:27:13.750 INFO:teuthology.orchestra.run.vm04.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-08T23:27:13.750 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:13.750 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:13.750 INFO:teuthology.orchestra.run.vm04.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:13.751 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:13.752 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:13.752 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:13.752 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:13.786 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:13.786 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:13.820 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:13.840 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:13.841 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 
2026-03-08T23:27:13.842 INFO:teuthology.orchestra.run.vm02.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-08T23:27:13.842 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph lua-any lua-sec 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: lua-socket lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:13.843 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:13.844 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:13.869 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:13.869 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:13.904 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 
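The "Package 'libcephfs-dev' is not installed, so not removed" lines show that the uninstall helper issues the same removal list in several passes, so later passes become no-ops. The long "automatically installed and are no longer required" blocks are apt flagging dependency-only packages whose reverse-dependencies (the ceph packages) are now gone. A sketch of inspecting and reclaiming them, along the lines the output itself suggests:

    # list packages apt considers auto-installed, then drop the orphans
    apt-mark showauto | sort | head
    sudo apt-get autoremove --purge --assume-yes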
2026-03-08T23:27:14.013 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:14.014 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:14.014 INFO:teuthology.orchestra.run.vm10.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:14.014 INFO:teuthology.orchestra.run.vm10.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:14.014 INFO:teuthology.orchestra.run.vm10.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:14.015 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:14.022 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED: 2026-03-08T23:27:14.022 INFO:teuthology.orchestra.run.vm10.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-08T23:27:14.022 INFO:teuthology.orchestra.run.vm10.stdout: qemu-block-extra* rbd-fuse* 2026-03-08T23:27:14.047 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:14.048 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-08T23:27:14.112 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:14.113 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
2026-03-08T23:27:14.197 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-08T23:27:14.197 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-08T23:27:14.241 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117480 files and directories currently installed.) 2026-03-08T23:27:14.243 INFO:teuthology.orchestra.run.vm10.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.286 INFO:teuthology.orchestra.run.vm10.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.295 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:14.295 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:14.295 INFO:teuthology.orchestra.run.vm02.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:14.295 INFO:teuthology.orchestra.run.vm02.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:14.295 INFO:teuthology.orchestra.run.vm02.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: 
python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:14.296 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:14.299 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:14.299 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:14.300 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:14.300 INFO:teuthology.orchestra.run.vm04.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:14.300 INFO:teuthology.orchestra.run.vm04.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm10.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:14.301 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:14.302 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:14.304 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-08T23:27:14.304 INFO:teuthology.orchestra.run.vm02.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-08T23:27:14.304 INFO:teuthology.orchestra.run.vm02.stdout: qemu-block-extra* rbd-fuse* 2026-03-08T23:27:14.313 INFO:teuthology.orchestra.run.vm10.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-08T23:27:14.317 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-08T23:27:14.317 INFO:teuthology.orchestra.run.vm04.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-08T23:27:14.318 INFO:teuthology.orchestra.run.vm04.stdout: qemu-block-extra* rbd-fuse* 2026-03-08T23:27:14.473 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-08T23:27:14.473 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-08T23:27:14.503 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-08T23:27:14.503 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-08T23:27:14.509 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117480 files and directories currently installed.) 2026-03-08T23:27:14.511 INFO:teuthology.orchestra.run.vm02.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.523 INFO:teuthology.orchestra.run.vm02.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.537 INFO:teuthology.orchestra.run.vm02.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.544 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117480 files and directories currently installed.) 2026-03-08T23:27:14.547 INFO:teuthology.orchestra.run.vm04.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.551 INFO:teuthology.orchestra.run.vm02.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-08T23:27:14.559 INFO:teuthology.orchestra.run.vm04.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.571 INFO:teuthology.orchestra.run.vm04.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:14.593 INFO:teuthology.orchestra.run.vm04.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-08T23:27:14.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.657 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.657 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.657 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.657 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.782 INFO:teuthology.orchestra.run.vm10.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.796 INFO:teuthology.orchestra.run.vm10.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.811 INFO:teuthology.orchestra.run.vm10.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:14.838 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:27:14.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:14 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.874 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:14 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.874 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:14 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.874 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:14 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.894 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:14.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.894 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.895 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.964 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117429 files and directories currently installed.) 2026-03-08T23:27:14.965 INFO:teuthology.orchestra.run.vm10.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-08T23:27:14.992 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.992 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.993 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.993 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.993 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:14 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:14.999 INFO:teuthology.orchestra.run.vm02.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:15.009 INFO:teuthology.orchestra.run.vm02.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:15.021 INFO:teuthology.orchestra.run.vm02.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
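The journalctl lines flag line 23 of the cephadm-generated unit template for this cluster's fsid. Systemd's suggested fix is a safer KillMode; a hedged sketch of applying one through a drop-in override, rather than editing the generated file, follows (unit name and fsid taken from the log; whether 'mixed' is actually appropriate for cephadm's container-managed daemons is a separate question, since cephadm historically set KillMode=none on purpose):

    # drop-in override for the templated cephadm unit seen in this log
    fsid=91105a84-1b44-11f1-9a43-e95894f13987
    sudo mkdir -p /etc/systemd/system/ceph-${fsid}@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/ceph-${fsid}@.service.d/killmode.conf
    sudo systemctl daemon-reload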
2026-03-08T23:27:15.038 INFO:teuthology.orchestra.run.vm04.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:15.045 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:27:15.051 INFO:teuthology.orchestra.run.vm04.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:15.065 INFO:teuthology.orchestra.run.vm04.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:15.094 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:27:15.099 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:15.157 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:15.183 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117429 files and directories currently installed.) 2026-03-08T23:27:15.185 INFO:teuthology.orchestra.run.vm02.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-08T23:27:15.214 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.214 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.214 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.215 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:27:15.215 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:14 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.229 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117429 files and directories currently installed.) 2026-03-08T23:27:15.231 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-08T23:27:15.258 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.259 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.259 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.259 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.354 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.354 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.354 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.355 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.355 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.563 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.563 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.563 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.563 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.564 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.603 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.603 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.603 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.603 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.657 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.657 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:27:15.657 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.657 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.658 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:15 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.874 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.874 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.874 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.875 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:15 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.894 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.894 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:15.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:15 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:16.807 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:16.849 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:16.937 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:16.969 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:16.978 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:17.006 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:17.098 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:17.099 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:17.212 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:17.212 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-08T23:27:17.233 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:17.234 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 
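[annotation] The systemd warnings repeated above all point at line 23 of the cephadm-generated unit ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service, which sets KillMode=none. A persistent fix belongs in the template cephadm uses to render that unit, but as a minimal sketch of the change systemd is asking for, a drop-in override on one of the test nodes could look like this (the drop-in file name is hypothetical; KillMode=mixed is one of the two values the warning itself suggests):

    # Sketch only: override the generated unit's KillMode via a systemd drop-in.
    # cephadm regenerates these units, so a lasting fix must change its template.
    sudo mkdir -p /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service.d/10-killmode.conf
    sudo systemctl daemon-reload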
2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout:Package 'librbd1' is not installed, so not removed 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:17.318 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:17.319 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:17.319 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:17.319 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:17.319 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:17.329 INFO:teuthology.orchestra.run.vm10.stdout:Package 'librbd1' is not installed, so not removed 2026-03-08T23:27:17.329 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:17.329 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:17.329 INFO:teuthology.orchestra.run.vm10.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:17.329 INFO:teuthology.orchestra.run.vm10.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:17.329 INFO:teuthology.orchestra.run.vm10.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:17.330 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:17.332 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:17.333 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:17.356 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:17.356 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
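[annotation] The apt stderr warnings above flag the --force-yes flag that teuthology passes to apt-get; apt deprecated it in favour of the narrower --allow-* switches. A sketch of an equivalent modern invocation follows (which --allow options are actually needed depends on the operation; the set below is illustrative, and the Dpkg::Options are copied from the commands teuthology runs later in this log):

    # Illustrative replacement for the deprecated --force-yes, per apt's warning.
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        --allow-downgrades --allow-remove-essential --allow-change-held-packages \
        -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
        autoremove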
2026-03-08T23:27:17.368 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout:Package 'librbd1' is not installed, so not removed 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:17.372 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:17.373 INFO:teuthology.orchestra.run.vm02.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:27:17.387 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:17.387 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:17.390 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:17.422 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 
2026-03-08T23:27:17.548 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:17.548 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-08T23:27:17.602 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:17.602 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:17.603 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:17.603 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:17.688 INFO:teuthology.orchestra.run.vm04.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-08T23:27:17.688 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:17.689 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-08T23:27:17.703 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:17.704 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:17.706 DEBUG:teuthology.orchestra.run.vm04:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-08T23:27:17.766 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-08T23:27:17.796 INFO:teuthology.orchestra.run.vm02.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-08T23:27:17.796 INFO:teuthology.orchestra.run.vm02.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:17.797 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:17.798 INFO:teuthology.orchestra.run.vm02.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:17.798 INFO:teuthology.orchestra.run.vm02.stdout:Use 
'sudo apt autoremove' to remove them. 2026-03-08T23:27:17.828 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:17.828 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:17.829 DEBUG:teuthology.orchestra.run.vm02:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-08T23:27:17.840 INFO:teuthology.orchestra.run.vm10.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-08T23:27:17.840 INFO:teuthology.orchestra.run.vm10.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:27:17.840 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:17.840 INFO:teuthology.orchestra.run.vm10.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:17.840 INFO:teuthology.orchestra.run.vm10.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:17.840 INFO:teuthology.orchestra.run.vm10.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:17.841 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:17.841 INFO:teuthology.orchestra.run.vm10.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:17.841 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:17.841 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:17.841 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:17.841 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:17.842 INFO:teuthology.orchestra.run.vm10.stdout:Use 'sudo apt autoremove' to remove them. 
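[annotation] The dpkg pipeline teuthology runs on each node just above clears out packages stuck in a broken install state before autoremove. In `dpkg -l` output the first status letter is the desired state, the second the current state, and the third an error flag, so the grep pattern selects packages that are Unpacked or Half-installed and flagged Reinst-required; awk extracts the package names and xargs purges them. An annotated copy of the command from the log:

    # Status letters in `dpkg -l`: pos 1 = desired state, pos 2 = current state
    # (U=Unpacked, H=Half-installed), pos 3 = error flag (R=Reinst-required).
    # Purge whatever matches; --no-run-if-empty makes this a no-op when nothing does.
    dpkg -l \
      | grep '^.\(U\|H\)R' \
      | awk '{print $2}' \
      | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq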
2026-03-08T23:27:17.844 DEBUG:teuthology.orchestra.run.vm02:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-08T23:27:17.844 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:17.870 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:27:17.871 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:17.872 DEBUG:teuthology.orchestra.run.vm10:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-08T23:27:17.886 DEBUG:teuthology.orchestra.run.vm10:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-08T23:27:17.921 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:17.964 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 2026-03-08T23:27:17.984 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-08T23:27:17.985 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-08T23:27:18.095 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-08T23:27:18.095 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:18.095 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:18.095 INFO:teuthology.orchestra.run.vm04.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:18.095 INFO:teuthology.orchestra.run.vm04.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:18.095 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn 
python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:18.096 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:18.110 INFO:teuthology.orchestra.run.vm10.stdout:Building dependency tree... 2026-03-08T23:27:18.110 INFO:teuthology.orchestra.run.vm10.stdout:Reading state information... 2026-03-08T23:27:18.117 INFO:teuthology.orchestra.run.vm02.stdout:Building dependency tree... 2026-03-08T23:27:18.117 INFO:teuthology.orchestra.run.vm02.stdout:Reading state information... 2026-03-08T23:27:18.288 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 83 to remove and 10 not upgraded. 2026-03-08T23:27:18.288 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 103 MB disk space will be freed. 2026-03-08T23:27:18.334 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117429 files and directories currently installed.) 2026-03-08T23:27:18.336 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:27:18.342 INFO:teuthology.orchestra.run.vm02.stdout:The following packages will be REMOVED: 2026-03-08T23:27:18.343 INFO:teuthology.orchestra.run.vm02.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:18.343 INFO:teuthology.orchestra.run.vm02.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:18.343 INFO:teuthology.orchestra.run.vm02.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:18.343 INFO:teuthology.orchestra.run.vm02.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:18.343 INFO:teuthology.orchestra.run.vm02.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:18.344 INFO:teuthology.orchestra.run.vm02.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:18.354 INFO:teuthology.orchestra.run.vm04.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-08T23:27:18.367 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 
2026-03-08T23:27:18.371 INFO:teuthology.orchestra.run.vm10.stdout:The following packages will be REMOVED: 2026-03-08T23:27:18.371 INFO:teuthology.orchestra.run.vm10.stdout: ceph-mgr-modules-core jq libboost-iostreams1.74.0 libboost-thread1.74.0 2026-03-08T23:27:18.371 INFO:teuthology.orchestra.run.vm10.stdout: libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 liboath0 libonig5 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 libqt5network5 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: librdkafka1 libreadline-dev libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-kubernetes python3-logutils python3-mako python3-natsort 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-paste python3-pastedeploy python3-pastescript python3-pecan 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-portend python3-prettytable python3-psutil python3-pyinotify 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-threadpoolctl python3-waitress python3-wcwidth python3-webob 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-08T23:27:18.372 INFO:teuthology.orchestra.run.vm10.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-08T23:27:18.377 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:27:18.387 INFO:teuthology.orchestra.run.vm04.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T23:27:18.398 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.408 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.417 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.438 INFO:teuthology.orchestra.run.vm04.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T23:27:18.449 INFO:teuthology.orchestra.run.vm04.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 
2026-03-08T23:27:18.460 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.470 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.481 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.492 INFO:teuthology.orchestra.run.vm04.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.503 INFO:teuthology.orchestra.run.vm04.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-08T23:27:18.514 INFO:teuthology.orchestra.run.vm04.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T23:27:18.525 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T23:27:18.537 INFO:teuthology.orchestra.run.vm04.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:27:18.543 INFO:teuthology.orchestra.run.vm02.stdout:0 upgraded, 0 newly installed, 83 to remove and 10 not upgraded. 2026-03-08T23:27:18.543 INFO:teuthology.orchestra.run.vm02.stdout:After this operation, 103 MB disk space will be freed. 2026-03-08T23:27:18.555 INFO:teuthology.orchestra.run.vm10.stdout:0 upgraded, 0 newly installed, 83 to remove and 10 not upgraded. 2026-03-08T23:27:18.555 INFO:teuthology.orchestra.run.vm10.stdout:After this operation, 103 MB disk space will be freed. 2026-03-08T23:27:18.567 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T23:27:18.578 INFO:teuthology.orchestra.run.vm04.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-08T23:27:18.584 INFO:teuthology.orchestra.run.vm02.stdout:(Reading database ... 117429 files and directories currently installed.) 2026-03-08T23:27:18.587 INFO:teuthology.orchestra.run.vm02.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:18.588 INFO:teuthology.orchestra.run.vm04.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T23:27:18.595 INFO:teuthology.orchestra.run.vm10.stdout:(Reading database ... 117429 files and directories currently installed.) 2026-03-08T23:27:18.598 INFO:teuthology.orchestra.run.vm10.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:18.599 INFO:teuthology.orchestra.run.vm04.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T23:27:18.603 INFO:teuthology.orchestra.run.vm02.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-08T23:27:18.609 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T23:27:18.615 INFO:teuthology.orchestra.run.vm10.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-08T23:27:18.616 INFO:teuthology.orchestra.run.vm02.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:27:18.619 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-08T23:27:18.628 INFO:teuthology.orchestra.run.vm10.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:27:18.629 INFO:teuthology.orchestra.run.vm02.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:27:18.629 INFO:teuthology.orchestra.run.vm04.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T23:27:18.640 INFO:teuthology.orchestra.run.vm04.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T23:27:18.640 INFO:teuthology.orchestra.run.vm10.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:27:18.641 INFO:teuthology.orchestra.run.vm02.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T23:27:18.651 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-any (27ubuntu1) ... 2026-03-08T23:27:18.653 INFO:teuthology.orchestra.run.vm10.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T23:27:18.653 INFO:teuthology.orchestra.run.vm02.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.661 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-08T23:27:18.665 INFO:teuthology.orchestra.run.vm02.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.666 INFO:teuthology.orchestra.run.vm10.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.672 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T23:27:18.678 INFO:teuthology.orchestra.run.vm02.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.679 INFO:teuthology.orchestra.run.vm10.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.684 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-08T23:27:18.691 INFO:teuthology.orchestra.run.vm10.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:27:18.697 INFO:teuthology.orchestra.run.vm02.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T23:27:18.701 INFO:teuthology.orchestra.run.vm04.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T23:27:18.712 INFO:teuthology.orchestra.run.vm02.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-08T23:27:18.713 INFO:teuthology.orchestra.run.vm10.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T23:27:18.727 INFO:teuthology.orchestra.run.vm02.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.728 INFO:teuthology.orchestra.run.vm10.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-08T23:27:18.739 INFO:teuthology.orchestra.run.vm02.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.742 INFO:teuthology.orchestra.run.vm10.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 
2026-03-08T23:27:18.750 INFO:teuthology.orchestra.run.vm02.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.754 INFO:teuthology.orchestra.run.vm10.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.852 INFO:teuthology.orchestra.run.vm02.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.858 INFO:teuthology.orchestra.run.vm10.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.866 INFO:teuthology.orchestra.run.vm02.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-08T23:27:18.875 INFO:teuthology.orchestra.run.vm10.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:27:18.880 INFO:teuthology.orchestra.run.vm02.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T23:27:18.887 INFO:teuthology.orchestra.run.vm10.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-08T23:27:18.891 INFO:teuthology.orchestra.run.vm02.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T23:27:18.899 INFO:teuthology.orchestra.run.vm10.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T23:27:18.902 INFO:teuthology.orchestra.run.vm02.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:27:18.910 INFO:teuthology.orchestra.run.vm10.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T23:27:18.921 INFO:teuthology.orchestra.run.vm10.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:27:18.927 INFO:teuthology.orchestra.run.vm02.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T23:27:18.940 INFO:teuthology.orchestra.run.vm02.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-08T23:27:18.946 INFO:teuthology.orchestra.run.vm10.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T23:27:18.952 INFO:teuthology.orchestra.run.vm02.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T23:27:18.959 INFO:teuthology.orchestra.run.vm10.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-08T23:27:18.962 INFO:teuthology.orchestra.run.vm02.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-08T23:27:18.971 INFO:teuthology.orchestra.run.vm10.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T23:27:18.972 INFO:teuthology.orchestra.run.vm02.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T23:27:18.982 INFO:teuthology.orchestra.run.vm02.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-08T23:27:18.983 INFO:teuthology.orchestra.run.vm10.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-08T23:27:18.995 INFO:teuthology.orchestra.run.vm02.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T23:27:18.995 INFO:teuthology.orchestra.run.vm10.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T23:27:19.008 INFO:teuthology.orchestra.run.vm02.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T23:27:19.009 INFO:teuthology.orchestra.run.vm10.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-08T23:27:19.020 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua-any (27ubuntu1) ... 2026-03-08T23:27:19.023 INFO:teuthology.orchestra.run.vm10.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T23:27:19.033 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-08T23:27:19.036 INFO:teuthology.orchestra.run.vm10.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 
2026-03-08T23:27:19.048 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T23:27:19.052 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua-any (27ubuntu1) ... 2026-03-08T23:27:19.063 INFO:teuthology.orchestra.run.vm02.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-08T23:27:19.064 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-08T23:27:19.070 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:18 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.070 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:18 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.070 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:18 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.070 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:18 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.077 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T23:27:19.082 INFO:teuthology.orchestra.run.vm02.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T23:27:19.090 INFO:teuthology.orchestra.run.vm10.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-08T23:27:19.107 INFO:teuthology.orchestra.run.vm10.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T23:27:19.158 INFO:teuthology.orchestra.run.vm04.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-08T23:27:19.189 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T23:27:19.205 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-08T23:27:19.269 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-08T23:27:19.324 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-08T23:27:19.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:19 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.374 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:19 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.374 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:19 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.374 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:19 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.381 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-08T23:27:19.394 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.394 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.395 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.395 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:27:19.395 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.407 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.407 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.407 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.407 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.407 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.443 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-08T23:27:19.453 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-08T23:27:19.516 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T23:27:19.525 INFO:teuthology.orchestra.run.vm02.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-08T23:27:19.559 INFO:teuthology.orchestra.run.vm02.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T23:27:19.562 INFO:teuthology.orchestra.run.vm10.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-08T23:27:19.573 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 
2026-03-08T23:27:19.646 INFO:teuthology.orchestra.run.vm10.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T23:27:19.660 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-08T23:27:19.667 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-08T23:27:19.715 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-08T23:27:19.736 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-08T23:27:19.780 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-08T23:27:19.791 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-08T23:27:19.793 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-08T23:27:19.845 INFO:teuthology.orchestra.run.vm02.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-08T23:27:19.847 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-08T23:27:19.853 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-08T23:27:19.857 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-08T23:27:19.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.894 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.894 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.894 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:19 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.896 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:19.906 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.907 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.907 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.907 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.907 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:19 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:19.928 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T23:27:19.929 INFO:teuthology.orchestra.run.vm10.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-08T23:27:19.943 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-08T23:27:19.949 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:20.004 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T23:27:20.006 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-08T23:27:20.074 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-08T23:27:20.129 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-08T23:27:20.179 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 
2026-03-08T23:27:20.203 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-08T23:27:20.232 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-08T23:27:20.266 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-08T23:27:20.288 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-08T23:27:20.297 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-08T23:27:20.326 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:20.341 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-08T23:27:20.357 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-08T23:27:20.374 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:20.392 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-08T23:27:20.411 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:20.426 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-08T23:27:20.442 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-08T23:27:20.468 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:27:20.487 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-08T23:27:20.526 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-08T23:27:20.544 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-08T23:27:20.573 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-08T23:27:20.592 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-08T23:27:20.599 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-08T23:27:20.640 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-08T23:27:20.642 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-08T23:27:20.655 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-08T23:27:20.693 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-08T23:27:20.693 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-08T23:27:20.708 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-08T23:27:20.744 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-08T23:27:20.746 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-08T23:27:20.763 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-08T23:27:20.795 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-08T23:27:20.800 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 
2026-03-08T23:27:20.813 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-08T23:27:20.844 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-08T23:27:20.861 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-08T23:27:20.862 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-08T23:27:20.911 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-08T23:27:20.912 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-08T23:27:20.962 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-08T23:27:20.964 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-08T23:27:20.971 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-08T23:27:21.015 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-08T23:27:21.034 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-08T23:27:21.065 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-08T23:27:21.087 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-08T23:27:21.090 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-08T23:27:21.112 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-08T23:27:21.141 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-08T23:27:21.157 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-08T23:27:21.168 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rsa (4.8-1) ... 2026-03-08T23:27:21.192 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-08T23:27:21.204 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-08T23:27:21.222 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-08T23:27:21.255 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-08T23:27:21.260 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-08T23:27:21.279 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-08T23:27:21.309 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-08T23:27:21.312 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-08T23:27:21.330 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-08T23:27:21.362 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-08T23:27:21.374 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-08T23:27:21.377 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-08T23:27:21.391 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-08T23:27:21.411 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 
2026-03-08T23:27:21.426 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-08T23:27:21.442 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-08T23:27:21.463 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-08T23:27:21.480 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-08T23:27:21.493 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-08T23:27:21.513 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-08T23:27:21.530 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-08T23:27:21.545 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-08T23:27:21.569 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-rsa (4.8-1) ... 2026-03-08T23:27:21.582 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-08T23:27:21.607 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-08T23:27:21.620 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-08T23:27:21.635 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-08T23:27:21.655 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-08T23:27:21.670 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-08T23:27:21.692 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-rsa (4.8-1) ... 2026-03-08T23:27:21.709 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-08T23:27:21.728 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-08T23:27:21.746 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-08T23:27:21.763 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-08T23:27:21.779 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-08T23:27:21.794 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-08T23:27:21.799 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-08T23:27:21.811 INFO:teuthology.orchestra.run.vm04.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-08T23:27:21.834 INFO:teuthology.orchestra.run.vm04.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T23:27:21.843 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-08T23:27:21.856 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-08T23:27:21.890 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-08T23:27:21.904 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-08T23:27:21.920 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-08T23:27:21.942 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 
2026-03-08T23:27:21.973 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-08T23:27:22.002 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-08T23:27:22.022 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-08T23:27:22.052 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-08T23:27:22.080 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-08T23:27:22.109 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-08T23:27:22.124 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.124 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.124 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.124 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:21 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.154 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-08T23:27:22.171 INFO:teuthology.orchestra.run.vm02.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-08T23:27:22.207 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-08T23:27:22.223 INFO:teuthology.orchestra.run.vm02.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-08T23:27:22.248 INFO:teuthology.orchestra.run.vm02.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T23:27:22.253 INFO:teuthology.orchestra.run.vm04.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-08T23:27:22.263 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-08T23:27:22.265 INFO:teuthology.orchestra.run.vm04.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-08T23:27:22.286 INFO:teuthology.orchestra.run.vm04.stdout:Removing xmlstarlet (1.6.1-2.1) ... 
2026-03-08T23:27:22.304 INFO:teuthology.orchestra.run.vm04.stdout:Removing zip (3.0-12build2) ... 2026-03-08T23:27:22.316 INFO:teuthology.orchestra.run.vm10.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-08T23:27:22.331 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:27:22.365 INFO:teuthology.orchestra.run.vm10.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-08T23:27:22.408 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-08T23:27:22.415 INFO:teuthology.orchestra.run.vm10.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T23:27:22.416 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:22.610 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.610 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.610 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.610 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.610 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.624 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 08 23:27:22 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:27:22.624 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 08 23:27:22 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.624 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 08 23:27:22 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.624 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 08 23:27:22 vm04 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.690 INFO:teuthology.orchestra.run.vm02.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-08T23:27:22.703 INFO:teuthology.orchestra.run.vm02.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-08T23:27:22.726 INFO:teuthology.orchestra.run.vm02.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-08T23:27:22.738 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.738 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.738 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.738 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:27:22.739 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.744 INFO:teuthology.orchestra.run.vm02.stdout:Removing zip (3.0-12build2) ... 2026-03-08T23:27:22.770 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:27:22.828 INFO:teuthology.orchestra.run.vm10.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-08T23:27:22.831 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-08T23:27:22.839 INFO:teuthology.orchestra.run.vm02.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:22.841 INFO:teuthology.orchestra.run.vm10.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-08T23:27:22.863 INFO:teuthology.orchestra.run.vm10.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-08T23:27:22.882 INFO:teuthology.orchestra.run.vm10.stdout:Removing zip (3.0-12build2) ... 2026-03-08T23:27:22.894 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.894 INFO:journalctl@ceph.mgr.x.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.895 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.895 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.895 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:22 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:22.912 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-08T23:27:23.016 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-08T23:27:23.027 INFO:teuthology.orchestra.run.vm10.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-08T23:27:23.157 INFO:journalctl@ceph.osd.5.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:23.157 INFO:journalctl@ceph.osd.7.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:23.157 INFO:journalctl@ceph.mon.c.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:23.157 INFO:journalctl@ceph.osd.6.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:23.157 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:22 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:23.414 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:23.417 DEBUG:teuthology.parallel:result is None 2026-03-08T23:27:23.823 INFO:teuthology.orchestra.run.vm02.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:27:23.826 DEBUG:teuthology.parallel:result is None 2026-03-08T23:27:24.222 INFO:teuthology.orchestra.run.vm10.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
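The apt warning above ("--force-yes is deprecated") comes from the package-removal step still passing the old umbrella flag. On current apt that flag is split into the --allow-* family; a minimal sketch of the replacement, where <package> is a placeholder and the set of --allow-* options actually needed depends on what --force-yes was papering over:

    # Deprecated spelling, as invoked in this run:
    sudo apt-get -y --force-yes remove <package>
    # Modern equivalent; enable only the overrides the removal really needs:
    sudo apt-get -y --allow-downgrades --allow-remove-essential --allow-change-held-packages remove <package>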
2026-03-08T23:27:24.225 DEBUG:teuthology.parallel:result is None 2026-03-08T23:27:24.225 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm02.local 2026-03-08T23:27:24.225 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local 2026-03-08T23:27:24.226 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm10.local 2026-03-08T23:27:24.226 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-08T23:27:24.226 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-08T23:27:24.226 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-08T23:27:24.233 DEBUG:teuthology.orchestra.run.vm02:> sudo apt-get update 2026-03-08T23:27:24.233 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update 2026-03-08T23:27:24.277 DEBUG:teuthology.orchestra.run.vm10:> sudo apt-get update 2026-03-08T23:27:24.413 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-08T23:27:24.416 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-08T23:27:24.417 INFO:teuthology.orchestra.run.vm02.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-08T23:27:24.418 INFO:teuthology.orchestra.run.vm02.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-08T23:27:24.423 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-08T23:27:24.427 INFO:teuthology.orchestra.run.vm02.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-08T23:27:24.471 INFO:teuthology.orchestra.run.vm10.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-08T23:27:24.475 INFO:teuthology.orchestra.run.vm10.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-08T23:27:24.484 INFO:teuthology.orchestra.run.vm10.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-08T23:27:24.761 INFO:teuthology.orchestra.run.vm02.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-08T23:27:24.783 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-08T23:27:24.855 INFO:teuthology.orchestra.run.vm10.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-08T23:27:25.634 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-08T23:27:25.650 INFO:teuthology.orchestra.run.vm02.stdout:Reading package lists... 2026-03-08T23:27:25.651 DEBUG:teuthology.parallel:result is None 2026-03-08T23:27:25.665 DEBUG:teuthology.parallel:result is None 2026-03-08T23:27:25.700 INFO:teuthology.orchestra.run.vm10.stdout:Reading package lists... 
2026-03-08T23:27:25.714 DEBUG:teuthology.parallel:result is None 2026-03-08T23:27:25.714 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-08T23:27:25.717 INFO:tasks.cephadm:Teardown begin 2026-03-08T23:27:25.717 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:27:25.726 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:27:25.733 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:27:25.741 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-08T23:27:25.741 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 91105a84-1b44-11f1-9a43-e95894f13987 -- ceph mgr module disable cephadm 2026-03-08T23:27:26.883 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/mon.a/config 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:2026-03-08T23:27:27.252+0000 7f939be52640 -1 monclient: keyring not found 2026-03-08T23:27:27.256 INFO:teuthology.orchestra.run.vm02.stderr:[errno 21] error connecting to the cluster 2026-03-08T23:27:27.300 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-08T23:27:27.300 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-08T23:27:27.300 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-08T23:27:27.304 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-08T23:27:27.308 DEBUG:teuthology.orchestra.run.vm10:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-08T23:27:27.312 INFO:tasks.cephadm:Stopping all daemons... 2026-03-08T23:27:27.312 INFO:tasks.cephadm.mon.a:Stopping mon.a... 
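The errno 21 (EISDIR) failure above means the keyring paths the client probes (/etc/ceph/ceph.keyring here, and /etc/ceph/ceph.client.admin.keyring later in the teardown) exist as directories rather than files on vm02, so `ceph mgr module disable cephadm` cannot authenticate. A directory sitting at a file path like this is commonly left behind by a container bind mount whose source file was missing at mount time (some runtimes create the missing source as a directory). A quick diagnostic sketch, assuming nothing beyond standard coreutils:

    # Show what is actually sitting at each keyring path (diagnostic sketch)
    for p in /etc/ceph/ceph.keyring /etc/ceph/ceph.client.admin.keyring; do
        sudo stat -c '%n: %F' "$p" 2>/dev/null || echo "$p: missing"
    done
    # If a path reports 'directory', remove it and restore a real keyring file
    # before retrying the cephadm shell command.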
2026-03-08T23:27:27.312 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a 2026-03-08T23:27:27.355 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.a.service' 2026-03-08T23:27:27.408 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.409 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-08T23:27:27.409 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-08T23:27:27.409 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.b 2026-03-08T23:27:27.418 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.b.service' 2026-03-08T23:27:27.472 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.472 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-08T23:27:27.472 INFO:tasks.cephadm.mon.c:Stopping mon.c... 2026-03-08T23:27:27.472 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.c 2026-03-08T23:27:27.482 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mon.c.service' 2026-03-08T23:27:27.534 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.535 INFO:tasks.cephadm.mon.c:Stopped mon.c 2026-03-08T23:27:27.535 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-08T23:27:27.535 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x 2026-03-08T23:27:27.545 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@mgr.x.service' 2026-03-08T23:27:27.596 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.596 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-08T23:27:27.596 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-08T23:27:27.596 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.0 2026-03-08T23:27:27.646 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.0.service' 2026-03-08T23:27:27.699 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.699 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-08T23:27:27.699 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-08T23:27:27.699 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.1 2026-03-08T23:27:27.750 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.1.service' 2026-03-08T23:27:27.802 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.802 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-08T23:27:27.802 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-08T23:27:27.802 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.2 2026-03-08T23:27:27.811 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.2.service' 2026-03-08T23:27:27.860 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.861 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-08T23:27:27.861 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-08T23:27:27.861 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.3 2026-03-08T23:27:27.912 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.3.service' 2026-03-08T23:27:27.965 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:27.965 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-08T23:27:27.965 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-08T23:27:27.965 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.4 2026-03-08T23:27:28.016 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.4.service' 2026-03-08T23:27:28.081 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:28.081 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-08T23:27:28.081 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-08T23:27:28.081 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.5 2026-03-08T23:27:28.090 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.5.service' 2026-03-08T23:27:28.142 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:28.142 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-08T23:27:28.142 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-08T23:27:28.142 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.6 2026-03-08T23:27:28.194 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.6.service' 2026-03-08T23:27:28.248 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:28.248 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-08T23:27:28.248 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-08T23:27:28.248 DEBUG:teuthology.orchestra.run.vm10:> sudo systemctl stop ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.7 2026-03-08T23:27:28.300 DEBUG:teuthology.orchestra.run.vm10:> sudo pkill -f 'journalctl -f -n 0 -u ceph-91105a84-1b44-11f1-9a43-e95894f13987@osd.7.service' 2026-03-08T23:27:28.352 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:27:28.352 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-08T23:27:28.352 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --force --keep-logs 2026-03-08T23:27:28.442 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:27:29.801 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:29 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:30.091 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:29 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:30.091 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:29 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:30.091 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:29 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:30.091 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:29 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:30.091 INFO:journalctl@ceph.iscsi.iscsi.a.vm02.stdout:Mar 08 23:27:29 vm02 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:31.142 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --force --keep-logs 2026-03-08T23:27:31.231 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:27:33.542 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --force --keep-logs 2026-03-08T23:27:33.638 INFO:teuthology.orchestra.run.vm10.stdout:Deleting cluster with fsid: 91105a84-1b44-11f1-9a43-e95894f13987 2026-03-08T23:27:34.999 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:34 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:35.259 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:35 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:35.259 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:35 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:35.585 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:35 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:35.585 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:35 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:35.585 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:35 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:35.585 INFO:journalctl@ceph.iscsi.iscsi.b.vm10.stdout:Mar 08 23:27:35 vm10 systemd[1]: /etc/systemd/system/ceph-91105a84-1b44-11f1-9a43-e95894f13987@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:27:36.304 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:27:36.311 INFO:teuthology.orchestra.run.vm02.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-08T23:27:36.312 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-08T23:27:36.312 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:27:36.319 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:27:36.327 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-08T23:27:36.327 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm02/crash 2026-03-08T23:27:36.327 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash -- . 
2026-03-08T23:27:36.360 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash: Cannot open: No such file or directory 2026-03-08T23:27:36.360 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now 2026-03-08T23:27:36.361 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm04/crash 2026-03-08T23:27:36.361 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash -- . 2026-03-08T23:27:36.368 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash: Cannot open: No such file or directory 2026-03-08T23:27:36.368 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now 2026-03-08T23:27:36.368 DEBUG:teuthology.misc:Transferring archived files from vm10:/var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm10/crash 2026-03-08T23:27:36.368 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash -- . 2026-03-08T23:27:36.375 INFO:teuthology.orchestra.run.vm10.stderr:tar: /var/lib/ceph/91105a84-1b44-11f1-9a43-e95894f13987/crash: Cannot open: No such file or directory 2026-03-08T23:27:36.375 INFO:teuthology.orchestra.run.vm10.stderr:tar: Error is not recoverable: exiting now 2026-03-08T23:27:36.376 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-08T23:27:36.376 DEBUG:teuthology.orchestra.run.vm02:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | head -n 1 2026-03-08T23:27:36.408 INFO:teuthology.orchestra.run.vm02.stderr:grep: /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph.log: No such file or directory 2026-03-08T23:27:36.409 WARNING:tasks.cephadm:Found errors (ERR|WRN|SEC) in cluster log 2026-03-08T23:27:36.409 INFO:tasks.cephadm:Compressing logs... 
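The "Found errors (ERR|WRN|SEC) in cluster log" warning just above is very likely a false positive: /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph.log does not exist (cephadm deployments log to journald unless log_to_file is enabled), so grep printed only a stderr complaint, yet the pipeline still exited 0 because the trailing head -n 1 succeeds on empty input. A sketch of a check that separates "log file absent" from "bad entries present" (hypothetical hardening, not the teuthology implementation):

    log=/var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph.log
    if [ ! -f "$log" ]; then
        echo "cluster log $log not found; nothing to scan"
    elif grep -E '\[ERR\]|\[WRN\]|\[SEC\]' "$log" \
            | grep -vE '\(MDS_ALL_DOWN\)|\(MDS_UP_LESS_THAN_MAX\)|MON_DOWN' \
            | grep -q .; then
        echo "found ERR/WRN/SEC entries in $log"
    fi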
2026-03-08T23:27:36.409 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:27:36.454 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:27:36.455 DEBUG:teuthology.orchestra.run.vm10:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:27:36.460 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-08T23:27:36.460 INFO:teuthology.orchestra.run.vm02.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-08T23:27:36.460 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log
2026-03-08T23:27:36.461 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log: 87.8% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-08T23:27:36.461 INFO:teuthology.orchestra.run.vm02.stderr: 60.9% -- replaced with /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log.gz
2026-03-08T23:27:36.461 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-08T23:27:36.461 INFO:teuthology.orchestra.run.vm02.stderr:real 0m0.006s
2026-03-08T23:27:36.461 INFO:teuthology.orchestra.run.vm02.stderr:user 0m0.009s
2026-03-08T23:27:36.461 INFO:teuthology.orchestra.run.vm02.stderr:sys 0m0.000s
2026-03-08T23:27:36.462 INFO:teuthology.orchestra.run.vm04.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-08T23:27:36.462 INFO:teuthology.orchestra.run.vm04.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-08T23:27:36.463 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log
2026-03-08T23:27:36.463 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: 89.4% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-08T23:27:36.463 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log: 79.6% -- replaced with /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log.gz
2026-03-08T23:27:36.464 INFO:teuthology.orchestra.run.vm10.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-08T23:27:36.464 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm04.stderr:real 0m0.008s
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm04.stderr:user 0m0.005s
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.006s
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/cephadm.log: 90.3% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-08T23:27:36.465 INFO:teuthology.orchestra.run.vm10.stderr:/var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log: 79.8% -- replaced with /var/log/ceph/91105a84-1b44-11f1-9a43-e95894f13987/ceph-volume.log.gz
2026-03-08T23:27:36.466 INFO:teuthology.orchestra.run.vm10.stderr:
2026-03-08T23:27:36.466 INFO:teuthology.orchestra.run.vm10.stderr:real 0m0.007s
2026-03-08T23:27:36.466 INFO:teuthology.orchestra.run.vm10.stderr:user 0m0.010s
2026-03-08T23:27:36.466 INFO:teuthology.orchestra.run.vm10.stderr:sys 0m0.001s
2026-03-08T23:27:36.466 INFO:tasks.cephadm:Archiving logs...
2026-03-08T23:27:36.466 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm02/log
2026-03-08T23:27:36.466 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-08T23:27:36.512 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm04/log
2026-03-08T23:27:36.512 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-08T23:27:36.520 DEBUG:teuthology.misc:Transferring archived files from vm10:/var/log/ceph to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm10/log
2026-03-08T23:27:36.520 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-08T23:27:36.527 INFO:tasks.cephadm:Removing cluster...
2026-03-08T23:27:36.527 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --force
2026-03-08T23:27:36.643 INFO:teuthology.orchestra.run.vm02.stdout:Deleting cluster with fsid: 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:27:37.723 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --force
2026-03-08T23:27:37.815 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:27:38.881 DEBUG:teuthology.orchestra.run.vm10:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 91105a84-1b44-11f1-9a43-e95894f13987 --force
2026-03-08T23:27:38.971 INFO:teuthology.orchestra.run.vm10.stdout:Deleting cluster with fsid: 91105a84-1b44-11f1-9a43-e95894f13987
2026-03-08T23:27:40.035 INFO:tasks.cephadm:Removing cephadm ...
2026-03-08T23:27:40.035 DEBUG:teuthology.orchestra.run.vm02:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-08T23:27:40.038 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-08T23:27:40.042 DEBUG:teuthology.orchestra.run.vm10:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-08T23:27:40.045 INFO:tasks.cephadm:Teardown complete
2026-03-08T23:27:40.045 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-08T23:27:40.047 INFO:teuthology.task.clock:Checking final clock skew...
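Each "Transferring archived files" step above streams a tar of the remote directory over the SSH connection and unpacks it under the job's archive directory, so no intermediate tarball is written on the target node. A rough equivalent, assuming plain ssh access to the node (the wiring below is illustrative, not teuthology's implementation):

    # Stream `sudo tar c` from the remote host into a local extraction,
    # mirroring the vm02 log transfer above.
    import os
    import subprocess

    def pull_archive(host, remote_dir, local_dir):
        os.makedirs(local_dir, exist_ok=True)
        remote = subprocess.Popen(
            ["ssh", host, "sudo", "tar", "c", "-f", "-", "-C", remote_dir, "--", "."],
            stdout=subprocess.PIPE)
        subprocess.run(["tar", "x", "-f", "-", "-C", local_dir],
                       stdin=remote.stdout, check=True)
        remote.stdout.close()
        remote.wait()

    pull_archive("vm02.local", "/var/log/ceph",
                 "/archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm02/log")

The interleaved gzip output above is expected: xargs is run with --max-procs=0, so the per-file gzip processes execute in parallel and their --verbose lines can overlap in the captured stderr.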
2026-03-08T23:27:40.047 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:27:40.082 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:27:40.085 DEBUG:teuthology.orchestra.run.vm10:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:==============================================================================
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:-server1b.meinbe 131.188.3.222 2 u 53 64 377 23.513 -0.090 0.049
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:+static.222.16.4 35.73.197.144 2 u 47 64 377 0.356 +0.160 0.084
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:+time.netzwerge. 31.209.85.243 2 u 44 64 377 33.255 +0.015 0.065
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:-node-1.infogral 168.239.11.197 2 u 40 64 377 23.571 -0.132 0.122
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:-ntp3.adminforge 131.188.3.220 2 u 41 64 377 24.984 -0.075 0.083
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:#hatkeininter.ne 237.17.204.95 2 u 36 64 377 25.005 +0.011 0.115
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:#ntp.ntstime.org 131.188.3.222 2 u 43 64 377 28.245 -2.368 0.076
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:*ntp1.aew1.soe.a .GPS. 1 u 36 64 377 25.257 +0.073 0.071
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:-185.125.190.57 194.121.207.249 2 u 56 128 377 31.117 +0.727 1.058
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:-77.90.0.148 (14 131.188.3.220 2 u 44 128 377 22.846 +1.166 0.298
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:-47.ip-51-75-67. 185.248.188.98 2 u 30 64 377 21.227 +0.485 0.231
2026-03-08T23:27:40.531 INFO:teuthology.orchestra.run.vm02.stdout:+185.125.190.58 145.238.80.80 2 u 59 64 377 31.127 +0.240 0.092
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:==============================================================================
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:+server1b.meinbe 131.188.3.222 2 u 41 64 377 23.544 +0.179 0.419
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-node-1.infogral 168.239.11.197 2 u 47 64 377 23.543 -0.333 0.349
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-time.netzwerge. 31.209.85.243 2 u 44 64 377 33.164 +0.138 0.581
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:+static.222.16.4 35.73.197.144 2 u 51 64 377 0.392 +0.511 0.853
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-ntp3.adminforge 131.188.3.220 2 u 31 64 377 24.969 +0.069 0.361
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:#77.90.0.148 (14 131.188.3.220 2 u 42 64 377 23.045 +1.397 0.383
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-47.ip-51-75-67. 185.248.188.98 2 u 44 64 377 21.179 +0.122 0.440
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-185.125.190.58 145.238.80.80 2 u 62 128 377 31.980 -0.502 0.512
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-185.125.190.57 194.121.207.249 2 u 68 128 377 33.354 +1.239 0.337
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:*ntp1.aew1.soe.a .GPS. 1 u 34 64 377 25.293 +0.198 0.319
2026-03-08T23:27:40.542 INFO:teuthology.orchestra.run.vm04.stdout:-185.125.190.56 79.243.60.50 2 u 59 128 377 33.348 +0.744 0.368
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:==============================================================================
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:+time.netzwerge. 31.209.85.243 2 u 39 64 377 33.170 +0.284 0.564
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:-baernet.net 192.53.103.108 2 u 45 64 377 23.524 -0.162 0.240
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:+server1b.meinbe 131.188.3.222 2 u 36 64 377 23.524 +0.134 0.365
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:*47.ip-51-75-67. 185.248.188.98 2 u 38 64 377 21.168 +0.121 0.417
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:+ntp3.adminforge 131.188.3.220 2 u 32 64 377 24.971 +0.180 0.685
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:-mail.sassmann.n 192.53.103.103 2 u 99 128 377 23.635 -0.342 0.239
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:-185.125.190.57 194.121.207.249 2 u 61 64 377 31.944 +0.452 0.310
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:-185.125.190.56 79.243.60.50 2 u 56 64 377 31.980 +0.542 0.416
2026-03-08T23:27:40.543 INFO:teuthology.orchestra.run.vm10.stdout:+185.125.190.58 145.238.80.80 2 u 52 64 377 33.301 +0.120 0.405
2026-03-08T23:27:40.544 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-08T23:27:40.546 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-08T23:27:40.546 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-08T23:27:40.548 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-08T23:27:40.551 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-08T23:27:40.553 INFO:teuthology.task.internal:Duration was 988.277653 seconds
2026-03-08T23:27:40.553 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-08T23:27:40.555 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-08T23:27:40.555 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-08T23:27:40.556 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-08T23:27:40.558 DEBUG:teuthology.orchestra.run.vm10:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-08T23:27:40.586 INFO:teuthology.task.internal.syslog:Checking logs for errors...
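In the ntpq -p tables above, the first column's tally code classifies each peer ('*' current system peer, '+' candidate, '-' outlier, '#' selected backup), and the offset column is in milliseconds; all three nodes sit within about 1 ms of their selected peers, so the final skew check is unremarkable. A small parser for output in this shape (a sketch assuming the standard ntpq column order):

    # Return the largest absolute peer offset (ms) from `ntpq -p` output.
    def max_abs_offset(ntpq_output):
        worst = 0.0
        for line in ntpq_output.splitlines():
            fields = line.split()
            # skip the header row and the '====' separator line
            if len(fields) < 10 or fields[0] == "remote":
                continue
            try:
                # columns: remote refid st t when poll reach delay offset jitter
                worst = max(worst, abs(float(fields[-2])))
            except ValueError:
                continue
        return worst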
2026-03-08T23:27:40.586 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local
2026-03-08T23:27:40.586 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-08T23:27:40.636 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-08T23:27:40.636 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-08T23:27:40.649 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm10.local
2026-03-08T23:27:40.649 DEBUG:teuthology.orchestra.run.vm10:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-08T23:27:40.662 INFO:teuthology.task.internal.syslog:Gathering journalctl...
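The syslog check above scans each node's captured kern.log for BUG/INFO/DEADLOCK markers and whitelists a long tail of known-benign messages with grep -v stages; because of the trailing head -n 1, a single surviving line is enough to report an error. The same structure in Python (a sketch; the ignore list is abbreviated to a few of the patterns from the command above):

    import re

    MATCH = re.compile(r"\bBUG\b|\bINFO\b|\bDEADLOCK\b")
    IGNORE = [re.compile(p) for p in (
        r"task .* blocked for more than .* seconds",
        r"lockdep is turned off",
        r"CRON",
        r"INFO: NMI handler \(perf_event_nmi_handler\) took too long to run",
        # ...the remaining exclusions from the grep pipeline are elided
    )]

    def first_bad_line(path):
        # equivalent of grep --binary-files=text ... | head -n 1
        with open(path, errors="replace") as f:
            for line in f:
                if MATCH.search(line) and not any(p.search(line) for p in IGNORE):
                    return line.rstrip("\n")
        return None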
2026-03-08T23:27:40.662 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:27:40.678 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:27:40.693 DEBUG:teuthology.orchestra.run.vm10:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:27:40.761 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-08T23:27:40.761 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:27:40.762 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:27:40.764 DEBUG:teuthology.orchestra.run.vm10:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:27:40.769 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:27:40.769 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:27:40.769 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-08T23:27:40.769 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:27:40.770 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:27:40.770 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-08T23:27:40.770 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:27:40.770 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gzgzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:27:40.770 INFO:teuthology.orchestra.run.vm04.stderr:
2026-03-08T23:27:40.771 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-08T23:27:40.771 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:27:40.772 INFO:teuthology.orchestra.run.vm10.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:27:40.772 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:27:40.772 INFO:teuthology.orchestra.run.vm10.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-08T23:27:40.772 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-08T23:27:40.779 INFO:teuthology.orchestra.run.vm04.stderr: 90.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-08T23:27:40.779 INFO:teuthology.orchestra.run.vm02.stderr: 91.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-08T23:27:40.782 INFO:teuthology.orchestra.run.vm10.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 91.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-08T23:27:40.783 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-08T23:27:40.787 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-08T23:27:40.787 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-08T23:27:40.829 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-08T23:27:40.837 DEBUG:teuthology.orchestra.run.vm10:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-08T23:27:40.845 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-08T23:27:40.848 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:27:40.870 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:27:40.876 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core
2026-03-08T23:27:40.881 DEBUG:teuthology.orchestra.run.vm10:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:27:40.887 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-08T23:27:40.895 INFO:teuthology.orchestra.run.vm10.stdout:kernel.core_pattern = core
2026-03-08T23:27:40.901 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:27:40.928 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:27:40.928 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:27:40.942 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:27:40.943 DEBUG:teuthology.orchestra.run.vm10:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:27:40.947 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:27:40.947 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-08T23:27:40.949 INFO:teuthology.task.internal:Transferring archived files...
2026-03-08T23:27:40.950 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm02
2026-03-08T23:27:40.950 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
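The internal.coredump unwind above restores kernel.core_pattern, deletes any core files that `file` identifies as systemd-sysusers noise, and removes the coredump directory only if it is then empty; the subsequent `test -e` exiting 1 on all three nodes is the good case, meaning no cores survived to archive. Sketched locally (illustrative, not the task's code):

    import os
    import subprocess

    def prune_coredumps(d="/home/ubuntu/cephtest/archive/coredump"):
        for root, _, files in os.walk(d):
            for name in files:
                path = os.path.join(root, name)
                kind = subprocess.run(["file", path], capture_output=True,
                                      text=True).stdout
                if "systemd-sysusers" in kind:
                    os.remove(path)  # known-noise core, not a test failure
        try:
            os.rmdir(d)  # succeeds only when no real cores remain
        except OSError:
            pass
        return os.path.exists(d)  # True means genuine coredumps to archive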
2026-03-08T23:27:40.978 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm04
2026-03-08T23:27:40.978 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-08T23:27:40.992 DEBUG:teuthology.misc:Transferring archived files from vm10:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/295/remote/vm10
2026-03-08T23:27:40.992 DEBUG:teuthology.orchestra.run.vm10:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-08T23:27:40.999 INFO:teuthology.task.internal:Removing archive directory...
2026-03-08T23:27:41.000 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-08T23:27:41.022 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-08T23:27:41.037 DEBUG:teuthology.orchestra.run.vm10:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-08T23:27:41.043 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-08T23:27:41.045 INFO:teuthology.task.internal:Not uploading archives.
2026-03-08T23:27:41.045 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-08T23:27:41.048 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-08T23:27:41.048 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-08T23:27:41.066 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-08T23:27:41.068 INFO:teuthology.orchestra.run.vm02.stdout: 258078 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 8 23:27 /home/ubuntu/cephtest
2026-03-08T23:27:41.080 DEBUG:teuthology.orchestra.run.vm10:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-08T23:27:41.083 INFO:teuthology.orchestra.run.vm04.stdout: 258067 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 8 23:27 /home/ubuntu/cephtest
2026-03-08T23:27:41.087 INFO:teuthology.orchestra.run.vm10.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 8 23:27 /home/ubuntu/cephtest
2026-03-08T23:27:41.088 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-08T23:27:41.094 INFO:teuthology.run:Summary data:
description: orch:cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} supported-container-hosts$/{ubuntu_22.04} workloads/cephadm_iscsi}
duration: 988.2776529788971
failure_reason: 'Command failed on vm02 with status 1: ''CEPH_REF=master CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'''
flavor: default
owner: kyr
sentry_event: null
status: fail
success: false
2026-03-08T23:27:41.094 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-08T23:27:41.115 INFO:teuthology.run:FAIL
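The run ends with the summary block above: the job is marked fail because the cram workload on client.0 exited non-zero, and the same data is pushed to the report endpoint on localhost:8080. For post-processing many such logs, the summary parses as YAML (a sketch assuming PyYAML is available; the field names are exactly those printed above):

    import yaml  # PyYAML

    def summarize(summary_text):
        s = yaml.safe_load(summary_text)
        return {
            "status": s.get("status"),
            "duration_s": round(float(s.get("duration", 0.0)), 1),
            "reason": s.get("failure_reason"),
        }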